Evaluating The Reliability Of Self-explanations In Large Language Models

Randl Korbinian, Pavlopoulos John, Henriksson Aron, Lindgren Tony. Arxiv 2024

[Paper]    
Interpretability And Explainability Prompting

This paper investigates the reliability of explanations generated by large language models (LLMs) when prompted to explain their previous output. We evaluate two kinds of such self-explanations - extractive and counterfactual - using three state-of-the-art LLMs (2B to 8B parameters) on two different classification tasks (objective and subjective). Our findings reveal that, while these self-explanations can correlate with human judgement, they do not fully and accurately follow the model's decision process, indicating a gap between perceived and actual model reasoning. We show that this gap can be bridged: prompting LLMs for counterfactual explanations can produce faithful, informative, and easy-to-verify results. These counterfactuals offer a promising alternative to traditional explainability methods (e.g. SHAP, LIME), provided that prompts are tailored to specific tasks and checked for validity.
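As an illustration of the validity check mentioned above, the sketch below shows one way to elicit a counterfactual self-explanation and verify it by re-classifying the edited text. This is a minimal sketch under assumptions: `query_llm`, `classify`, and the prompt wording are hypothetical placeholders, not the paper's actual prompts, models, or evaluation code.

```python
# Hypothetical sketch: eliciting a counterfactual self-explanation from an LLM
# and checking its validity by re-classifying the edited input.
# `query_llm` stands in for whatever chat/completion API is actually used;
# the prompts below are illustrative, not the paper's exact wording.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a 2B-8B chat model (API or local runtime)."""
    raise NotImplementedError

def classify(text: str, labels: list[str]) -> str:
    """Ask the model to assign one of `labels` to `text`."""
    answer = query_llm(
        f"Classify the following text as one of {labels}.\n"
        f"Text: {text}\nAnswer with the label only."
    )
    return answer.strip()

def counterfactual_explanation(text: str, predicted: str, target: str) -> str:
    """Ask the model to minimally edit `text` so that its label changes to `target`."""
    return query_llm(
        f"You labelled the text below as '{predicted}'.\n"
        f"Rewrite it with as few changes as possible so that the correct label "
        f"becomes '{target}'.\nText: {text}\nRewritten text:"
    ).strip()

def counterfactual_is_valid(edited: str, target: str, labels: list[str]) -> bool:
    """Validity check: the edited text must actually be classified as `target`."""
    return classify(edited, labels) == target
```

The key design point is that a counterfactual is only accepted if feeding the rewritten text back through the classifier actually yields the target label; this re-classification step is what makes counterfactual self-explanations comparatively easy to verify.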
