StoryAnalogy: Deriving Story-level Analogies from Large Language Models to Unlock Analogical Understanding

Jiayang Cheng, Lin Qiu, Tsz Ho Chan, Tianqing Fang, Weiqi Wang, Chunkit Chan, Dongyu Ru, Qipeng Guo, Hongming Zhang, Yangqiu Song, Yue Zhang, Zheng Zhang. arXiv 2023

[Paper]    
GPT Model Architecture

Analogy-making between narratives is crucial for human reasoning. In this paper, we evaluate the ability of models to identify and generate analogies by constructing a first-of-its-kind large-scale story-level analogy corpus, StoryAnalogy, which contains 24K story pairs from diverse domains with human annotations on two similarities from the extended Structure-Mapping Theory. We design a set of tests on StoryAnalogy, presenting the first evaluation of story-level analogy identification and generation. Interestingly, we find that the analogy identification tasks are extremely difficult not only for sentence embedding models but also for recent large language models (LLMs) such as ChatGPT and LLaMa. ChatGPT, for example, achieved only around 30% accuracy on multiple-choice questions, compared to over 85% accuracy for humans. Furthermore, we observe that the data in StoryAnalogy can improve the quality of analogy generation in LLMs, where a fine-tuned FlanT5-xxl model achieves performance comparable to zero-shot ChatGPT.
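The identification setting evaluated here can be approximated with an off-the-shelf sentence encoder: embed a query story and a set of candidate stories, then pick the candidate with the highest cosine similarity. The sketch below illustrates this baseline; the encoder name (`all-MiniLM-L6-v2`) and the toy stories are illustrative assumptions, not the paper's exact protocol. One reason such baselines struggle on this task is that embedding similarity tends to reward surface overlap rather than the relational structure that defines an analogy.

```python
# Minimal sketch of a sentence-embedding baseline for multiple-choice
# story-level analogy identification. The model and the example stories
# are assumptions for illustration, not the paper's evaluation setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf encoder

query_story = "The immune system patrols the body, attacking invaders it detects."
candidates = [
    "A security team patrols a building, stopping intruders it spots.",  # relationally analogous
    "A doctor prescribes antibiotics to treat a bacterial infection.",   # topically similar, not analogous
    "A chef prepares a meal for guests at a restaurant.",                # unrelated
]

# Embed the query and candidates, then predict the candidate whose
# embedding is closest to the query's by cosine similarity.
query_emb = model.encode(query_story, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(query_emb, cand_embs)[0]

prediction = int(scores.argmax())
print(f"Predicted analogy: {candidates[prediction]!r} (score={scores[prediction]:.3f})")
```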

Similar Work