Cabbage Sweeter Than Cake? Analysing The Potential Of Large Language Models For Learning Conceptual Spaces

Usashi Chatterjee, Amit Gajbhiye, Steven Schockaert. arXiv 2023

Tags: Applications, BERT, GPT, Model Architecture, RAG, Tools

The theory of Conceptual Spaces is an influential cognitive-linguistic framework for representing the meaning of concepts. Conceptual spaces are constructed from a set of quality dimensions, which essentially correspond to primitive perceptual features (e.g. hue or size). These quality dimensions are usually learned from human judgements, which means that applications of conceptual spaces tend to be limited to narrow domains (e.g. modelling colour or taste). Encouraged by recent findings about the ability of Large Language Models (LLMs) to learn perceptually grounded representations, we explore the potential of such models for learning conceptual spaces. Our experiments show that LLMs can indeed be used for learning meaningful representations to some extent. However, we also find that fine-tuned models of the BERT family are able to match or even outperform the largest GPT-3 model, despite being 2 to 3 orders of magnitude smaller.
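As a rough illustration of how an LLM might be probed for a quality dimension such as sweetness, the sketch below ranks a few food concepts by how strongly a BERT-family masked language model predicts the adjective "sweet" in a simple template. This is only a minimal assumed setup for illustration; the template, the probe words, and the choice of model are not taken from the paper, which evaluates its own prompting and fine-tuning protocols.

```python
# Illustrative sketch (not the paper's exact protocol): rank concepts along a
# hypothetical "sweetness" quality dimension by scoring how strongly a masked
# language model predicts the adjective "sweet" in a fixed template.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # assumption: any BERT-family model would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

concepts = ["cabbage", "cake", "lemon", "honey"]  # hypothetical probe set
adjective_id = tokenizer.convert_tokens_to_ids("sweet")

def sweetness_score(concept: str) -> float:
    """Log-probability of 'sweet' at the [MASK] position for this concept."""
    text = f"{concept} tastes {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    log_probs = torch.log_softmax(logits[0, mask_pos], dim=-1)
    return log_probs[adjective_id].item()

ranking = sorted(concepts, key=sweetness_score, reverse=True)
print("Predicted sweetness ranking:", ranking)
```

Rankings produced this way can then be compared against human judgements of the same quality dimension, which is the kind of evaluation the abstract alludes to when comparing fine-tuned BERT models with GPT-3.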

Similar Work