Scientific And Creative Analogies In Pretrained Language Models

Tamara Czinczoll, Helen Yannakoudakis, Pushkar Mishra, Ekaterina Shutova. arXiv 2022

[Paper]
Tags: BERT, GPT, Model Architecture

This paper examines how analogy is encoded in large-scale pretrained language models such as BERT and GPT-2. Existing analogy datasets typically cover a limited set of analogical relations, with high similarity between the two domains the analogy connects. As a more realistic setup, we introduce the Scientific and Creative Analogy dataset (SCAN), a novel analogy dataset containing systematic mappings of multiple attributes and relational structures across dissimilar domains. Using this dataset, we test the analogical reasoning capabilities of several widely used pretrained language models (LMs). We find that state-of-the-art LMs perform poorly on these complex analogy tasks, highlighting the challenges that analogy understanding still poses.
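
To make the evaluation setup concrete, below is a minimal, hypothetical sketch of zero-shot analogy probing with GPT-2 via Hugging Face Transformers: a SCAN-style cross-domain prompt is completed by scoring candidate targets under the model's log-likelihood. The prompt template, candidate set, and scoring scheme are illustrative assumptions, not the paper's exact evaluation protocol.

```python
# Hypothetical sketch: zero-shot analogy probing with GPT-2 via candidate
# log-likelihood scoring. Prompt template and candidates are illustrative
# assumptions, not the SCAN paper's exact evaluation protocol.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def completion_logprob(prompt: str, completion: str) -> float:
    """Sum of token log-probabilities of `completion` conditioned on `prompt`."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Position i of the logits predicts token i + 1 of the input.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    idx = torch.arange(prompt_len - 1, targets.shape[0])  # completion tokens only
    return log_probs[idx, targets[idx]].sum().item()

# Example SCAN-style mapping across dissimilar domains (solar system -> atom):
prompt = "If the sun is like the nucleus, then a planet is like"
candidates = [" an electron", " a proton", " a photon"]
scores = {c.strip(): completion_logprob(prompt, c) for c in candidates}
print(max(scores, key=scores.get), scores)
```

Scoring fixed candidates rather than sampling free-form completions keeps the probe deterministic and lets accuracy be computed directly against the gold target of each mapping.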

Similar Work