ANALOGYKB: Unlocking Analogical Reasoning Of Language Models With A Million-scale Knowledge Base

Yuan Siyu, Chen Jiangjie, Sun Changzhi, Liang Jiaqing, Xiao Yanghua, Yang Deqing. arXiv 2023

[Paper]    
Tags: Applications, Training Techniques

Analogical reasoning is a fundamental cognitive ability of humans. However, current language models (LMs) still struggle to achieve human-like performance on analogical reasoning tasks due to a lack of resources for model training. In this work, we address this gap by proposing ANALOGYKB, a million-scale analogy knowledge base (KB) derived from existing knowledge graphs (KGs). ANALOGYKB identifies two types of analogies from the KGs: 1) analogies of the same relations, which can be directly extracted from the KGs, and 2) analogies of analogous relations, which are identified with a selection and filtering pipeline enabled by large language models (LLMs), followed by minor human effort for data quality control. Evaluations on a series of datasets covering two analogical reasoning tasks (analogy recognition and analogy generation) demonstrate that ANALOGYKB successfully enables both smaller LMs and LLMs to gain better analogical reasoning capabilities.
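
The first type of analogy can be read directly off a KG: any two subject–object pairs that share a relation form a candidate same-relation analogy (A : B :: C : D). The following is a minimal Python sketch of that extraction step only, using a toy triple list; the data and variable names are illustrative assumptions, not the paper's released pipeline, and the LLM-based selection/filtering for analogous relations is not shown.

```python
from collections import defaultdict
from itertools import combinations

# Toy knowledge graph as (head, relation, tail) triples.
# These example triples are hypothetical, not from ANALOGYKB.
triples = [
    ("Paris", "capital_of", "France"),
    ("Tokyo", "capital_of", "Japan"),
    ("Berlin", "capital_of", "Germany"),
    ("oxygen", "discovered_by", "Priestley"),
]

# Group subject-object pairs by their shared relation.
by_relation = defaultdict(list)
for head, relation, tail in triples:
    by_relation[relation].append((head, tail))

# Every pair of pairs under the same relation is a
# candidate "analogy of the same relation".
analogies = [
    (pair_a, pair_b)
    for pairs in by_relation.values()
    for pair_a, pair_b in combinations(pairs, 2)
]

for (a, b), (c, d) in analogies:
    print(f"{a} : {b} :: {c} : {d}")  # e.g. Paris : France :: Tokyo : Japan
```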
