The Unreliability Of Explanations In Few-shot Prompting For Textual Reasoning

Xi Ye, Greg Durrett. arXiv 2022

[Paper]    
Applications, Few Shot, GPT, In Context Learning, Interpretability And Explainability, Model Architecture, Prompting

Does prompting a large language model (LLM) like GPT-3 with explanations improve in-context learning? We study this question on two NLP tasks that involve reasoning over text, namely question answering and natural language inference. We test the performance of four LLMs on three textual reasoning datasets using prompts that include explanations in multiple different styles. For these tasks, we find that including explanations in the prompts for OPT, GPT-3 (davinci), and InstructGPT (text-davinci-001) only yields small to moderate accuracy improvements over standard few-shot learning. However, text-davinci-002 is able to benefit more substantially. We further show that explanations generated by the LLMs may not entail the models' predictions and may not be factually grounded in the input, even on simple tasks with extractive explanations. However, these flawed explanations can still be useful as a way to verify LLMs' predictions post-hoc. Through analysis in our three settings, we show that explanations judged by humans to be good, i.e., logically consistent with the input and the prediction, are more likely to co-occur with accurate predictions. Following these observations, we train calibrators using automatically extracted scores that assess the reliability of explanations, allowing us to improve performance post-hoc across all of our datasets.
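The abstract's final step, training a calibrator on automatically extracted explanation-reliability scores, can be pictured with a minimal sketch like the one below. The features (lexical overlap with the input as a grounding proxy, explanation length) and the toy calibration data are illustrative assumptions, not the paper's actual scores, datasets, or model; the point is only the shape of the idea: fit a simple classifier that maps explanation features to the probability that the LLM's prediction is correct, then use that score to filter or reweight predictions post-hoc.

```python
# Hedged sketch: post-hoc calibration from explanation-reliability features.
# Feature choices and the toy data are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression


def explanation_features(explanation: str, context: str) -> list[float]:
    """Automatically extracted proxies for how reliable an explanation looks."""
    expl = set(explanation.lower().split())
    ctx = set(context.lower().split())
    # Lexical overlap with the input as a crude factual-grounding signal.
    overlap = len(expl & ctx) / max(len(expl), 1)
    return [overlap, float(len(expl))]


# Toy calibration set: (explanation, input context, was the LLM's answer correct?).
calibration_data = [
    ("the passage says the bridge opened in 1937",
     "the bridge opened in 1937 after four years of work", 1),
    ("the answer follows because cats are reptiles",
     "the passage discusses bridge construction", 0),
    ("the context states the author was born in Lyon",
     "the author was born in Lyon in 1862", 1),
    ("this is obviously true",
     "the report covers quarterly earnings", 0),
]

X = np.array([explanation_features(e, c) for e, c, _ in calibration_data])
y = np.array([label for _, _, label in calibration_data])

calibrator = LogisticRegression().fit(X, y)

# At test time, score a new explanation; low scores flag unreliable predictions.
test_features = explanation_features(
    "the passage states the river is 120 km long",
    "the river runs 120 km to the sea",
)
reliability = calibrator.predict_proba([test_features])[0, 1]
print(f"estimated reliability: {reliability:.2f}")
```

In this sketch the calibrated score could be used to abstain on low-scoring predictions or to rerank candidate answers, which mirrors the paper's finding that flawed explanations can still help verify predictions after the fact.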

Similar Work