Psychologically-informed Chain-of-thought Prompts For Metaphor Understanding In Large Language Models

Ben Prystawski, Paul Thibodeau, Christopher Potts, Noah D. Goodman. arXiv 2022

[Paper]

Tags: GPT, Interpretability And Explainability, Model Architecture, Prompting, Tools

Probabilistic models of language understanding are valuable tools for investigating human language use. However, they must be hand-designed for a particular domain. In contrast, large language models (LLMs) are trained on text spanning a wide array of domains, but they lack the structure and interpretability of probabilistic models. In this paper, we use chain-of-thought prompts to introduce structures from probabilistic models into LLMs. We explore this approach in the case of metaphor understanding. Our chain-of-thought prompts lead language models to infer latent variables and reason about their relationships in order to choose appropriate paraphrases for metaphors. The latent variables and relationships chosen are informed by theories of metaphor understanding from cognitive psychology. We apply these prompts to the two largest versions of GPT-3 and show that they can improve performance on a paraphrase selection task.
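To make the idea concrete, here is a minimal sketch (not the authors' actual prompt, which the paper should be consulted for) of how a chain-of-thought prompt might ask a model to infer a latent variable, such as the speaker's intended meaning, before selecting a paraphrase. The function name and reasoning steps are illustrative assumptions:

```python
def build_cot_prompt(metaphor: str, paraphrases: list[str]) -> str:
    """Assemble a chain-of-thought prompt for metaphor paraphrase selection.

    The numbered steps walk the model through inferring a latent
    intended meaning before it commits to a paraphrase choice.
    """
    options = "\n".join(f"{i + 1}. {p}" for i, p in enumerate(paraphrases))
    return (
        f'Metaphor: "{metaphor}"\n'
        "Step 1: What feature of the subject does the metaphor highlight?\n"
        "Step 2: What meaning does the speaker intend to convey?\n"
        "Step 3: Given that intended meaning, which paraphrase fits best?\n"
        f"Paraphrases:\n{options}\n"
        "Answer:"
    )

prompt = build_cot_prompt(
    "My lawyer is a shark",
    ["My lawyer is aggressive.", "My lawyer can swim."],
)
```

The prompt string would then be sent to an LLM; the intermediate steps give the model's reasoning an interpretable structure analogous to the latent variables of a probabilistic model.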
