On The Tip Of The Tongue: Analyzing Conceptual Representation In Large Language Models With Reverse-dictionary Probe

Xu Ningyu, Zhang Qi, Zhang Menghan, Qian Peng, Huang Xuanjing. arXiv 2024

[Paper]
Tags: Fine Tuning, Prompting

Probing and enhancing large language models' reasoning capacity remains a crucial open question. Here we re-purpose the reverse dictionary task as a case study to probe LLMs' capacity for conceptual inference. We use in-context learning to guide the models to generate the term for an object concept implied in a linguistic description. Models robustly achieve high accuracy in this task, and their representation space encodes information about object categories and fine-grained features. Further experiments suggest that the conceptual inference ability probed by the reverse-dictionary task predicts a model's general reasoning performance across multiple benchmarks, despite similar syntactic generalization behaviors across models. Exploratory analyses suggest that prompting LLMs with description\(\Rightarrow\)word examples may induce generalization beyond surface-level differences in task construals and help models on broader commonsense reasoning problems.
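
The core probe is a few-shot prompt of description\(\Rightarrow\)word demonstrations followed by a query description, with the model expected to complete the target word. Below is a minimal sketch of how such a prompt might be assembled; the example pairs, the `=>` separator, and the `query_llm` wrapper are illustrative assumptions for this page, not the paper's actual prompt materials.

```python
# Minimal sketch of a reverse-dictionary few-shot prompt.
# The demonstration pairs below are illustrative stand-ins, not the
# paper's actual stimuli; `query_llm` is a hypothetical wrapper around
# whatever LLM API is available.

FEW_SHOT_EXAMPLES = [
    ("a device that measures atmospheric pressure", "barometer"),
    ("a young dog", "puppy"),
    ("a place where books are kept for reading or borrowing", "library"),
]

def build_prompt(description: str) -> str:
    """Concatenate description => word demonstrations, then the query."""
    lines = [f"{desc} => {word}" for desc, word in FEW_SHOT_EXAMPLES]
    lines.append(f"{description} =>")  # the model should complete the word
    return "\n".join(lines)

def query_llm(prompt: str) -> str:
    """Placeholder: plug in a call to an actual language model here."""
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_prompt("an instrument for viewing distant objects")
    print(prompt)  # a capable model's continuation should be "telescope"
```

Accuracy on the probe is then simply the fraction of held-out descriptions for which the model's completion matches the target term, and the hidden states at the final prompt position can be analyzed for category and feature information.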
