
From Words To Worlds: Compositionality For Cognitive Architectures

Ruchira Dhar, Anders Søgaard. arXiv 2024

[Paper]    
Model Architecture, Reinforcement Learning

Large language models (LLMs) are highly performant connectionist systems, but do they exhibit compositionality? More importantly, is compositionality part of why they perform so well? We present empirical analyses across four LLM families (12 models) and three task categories, including a novel task introduced in the paper. Our findings reveal a nuanced relationship in how LLMs learn compositional strategies: while scaling enhances compositional abilities, instruction tuning often has the reverse effect. This disparity raises open questions about developing and improving large language models in alignment with human cognitive capacities.
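The paper's evaluation code is not reproduced here, but the described setup (scoring models of different scales on compositional tasks) can be sketched roughly as below. Everything in this sketch is an illustrative assumption rather than the authors' actual protocol: the toy SCAN-style command grammar, the held-out split, the few-shot prompt format, and the stubbed model interface are all made up for demonstration.

```python
# Minimal, hypothetical sketch of a compositional-generalization probe.
# The grammar, split, prompt format, and stubbed model interface are
# illustrative assumptions; the paper's actual tasks and models differ.
from typing import Callable, List, Tuple

# Toy SCAN-style mapping: primitive commands compose with repetition modifiers.
PRIMITIVES = {"jump": "JUMP", "walk": "WALK", "run": "RUN"}
MODIFIERS = {"twice": 2, "thrice": 3}

def gold_actions(command: str) -> str:
    """Compositional ground truth, e.g. 'jump twice' -> 'JUMP JUMP'."""
    verb, *rest = command.split()
    reps = MODIFIERS[rest[0]] if rest else 1
    return " ".join([PRIMITIVES[verb]] * reps)

def make_split() -> Tuple[List[str], List[str]]:
    """Hold out one primitive/modifier combination to test composition."""
    all_cmds = [f"{v} {m}" for v in PRIMITIVES for m in MODIFIERS] + list(PRIMITIVES)
    held_out = ["jump thrice"]  # combination never shown in the prompt examples
    train = [c for c in all_cmds if c not in held_out]
    return train, held_out

def evaluate(generate: Callable[[str], str]) -> float:
    """Exact-match accuracy on held-out compositions, given a generate
    function mapping a few-shot prompt to a predicted action sequence."""
    train, test = make_split()
    shots = "\n".join(f"{c} -> {gold_actions(c)}" for c in train)
    correct = 0
    for cmd in test:
        prompt = f"{shots}\n{cmd} ->"
        if generate(prompt).strip() == gold_actions(cmd):
            correct += 1
    return correct / len(test)

if __name__ == "__main__":
    # Stand-in for a real LLM call (an API or local checkpoint would go here);
    # this oracle stub just demonstrates the harness end to end.
    oracle = lambda p: gold_actions(p.rsplit("\n", 1)[-1].removesuffix("->").strip())
    print(f"held-out accuracy: {evaluate(oracle):.2f}")
```

Under this framing, the paper's scaling and instruction-tuning comparisons would amount to running `evaluate` over each checkpoint in a model family and comparing accuracy across parameter counts and against instruction-tuned variants.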
