
Meaningful Learning: Advancing Abstract Reasoning In Large Language Models Via Generic Fact Guidance

Xiong Kai, Ding Xiao, Liu Ting, Qin Bing, Xu Dongliang, Yang Qing, Liu Hongtao, Cao Yixin. arXiv 2024

[Paper]    
Tags: Interpretability and Explainability, RAG, Reinforcement Learning

Large language models (LLMs) have achieved impressive performance and strong explainability across various reasoning scenarios, marking a significant stride toward human-like intelligence. Despite this, when tasked with simple questions supported by a generic fact, LLMs often fail to provide consistent and precise answers, indicating a deficiency in abstract reasoning ability. This has sparked a vigorous debate about whether LLMs genuinely reason or merely memorize. In light of this, we design a preliminary study to quantify and probe the abstract reasoning abilities of existing LLMs. Our findings reveal a substantial discrepancy between their general reasoning and abstract reasoning performance. To mitigate this problem, we tailor an abstract reasoning dataset (AbsR) together with a meaningful-learning paradigm that teaches LLMs to leverage generic facts for reasoning. The results show that our approach not only boosts the general reasoning performance of LLMs but also makes considerable strides toward abstract reasoning, moving beyond simple memorization or imitation to a more nuanced understanding and application of generic facts.
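To make the idea of generic-fact guidance concrete, the sketch below shows one plausible way such training examples could be assembled: a single generic fact is paired with several concrete questions that instantiate it, so the model is trained to apply the fact rather than memorize each instance. The data format and field names here are illustrative assumptions, not the paper's actual AbsR schema.

```python
# Hypothetical sketch of generic-fact-guided training examples.
# One generic fact supports multiple concrete questions; pairing the
# fact with each question encourages abstract application of the fact
# instead of per-instance memorization. Schema is assumed, not AbsR's.

GENERIC_FACT = "Metals conduct electricity."

# Concrete questions all supported by the same generic fact.
questions = [
    ("Does a copper wire conduct electricity?", "Yes"),
    ("Does an iron nail conduct electricity?", "Yes"),
]

def make_example(fact: str, question: str, answer: str) -> dict:
    """Build one supervised example whose target answer is explicitly
    grounded in the generic fact."""
    prompt = f"Generic fact: {fact}\nQuestion: {question}\nAnswer:"
    target = f" {answer}. This follows from the generic fact: {fact}"
    return {"prompt": prompt, "target": target}

examples = [make_example(GENERIC_FACT, q, a) for q, a in questions]
for ex in examples:
    print(ex["prompt"] + ex["target"])
```

In this framing, the discrepancy the paper measures would correspond to a model answering one of these instantiations correctly while failing another, despite both resting on the same fact.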
