
TokenSHAP: Interpreting Large Language Models with Monte Carlo Shapley Value Estimation

Roni Goldshmidt, Miriam Horovicz. arXiv 2024

[Paper]    
Tags: Applications, Efficiency And Optimization, Ethics And Bias, Interpretability And Explainability, Model Architecture, Prompting, RAG, Reinforcement Learning, Responsible AI, Tools

As large language models (LLMs) become increasingly prevalent in critical applications, the need for interpretable AI has grown. We introduce TokenSHAP, a novel method for interpreting LLMs by attributing importance to individual tokens or substrings within input prompts. This approach adapts Shapley values from cooperative game theory to natural language processing, offering a rigorous framework for understanding how different parts of an input contribute to a model’s response. TokenSHAP leverages Monte Carlo sampling for computational efficiency, providing interpretable, quantitative measures of token importance. We demonstrate its efficacy across diverse prompts and LLM architectures, showing consistent improvements over existing baselines in alignment with human judgments, faithfulness to model behavior, and consistency. Our method’s ability to capture nuanced interactions between tokens provides valuable insights into LLM behavior, enhancing model transparency, improving prompt engineering, and aiding in the development of more reliable AI systems. TokenSHAP represents a significant step towards the necessary interpretability for responsible AI deployment, contributing to the broader goal of creating more transparent, accountable, and trustworthy AI systems.
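To make the core idea concrete, here is a minimal sketch of Monte Carlo Shapley value estimation for token importance: each token's score is its average marginal contribution across randomly sampled token orderings. This illustrates the general technique the paper adapts, not the authors' released implementation; the names `shapley_estimates` and `value_fn`, and the toy keyword-overlap value function, are hypothetical stand-ins (in TokenSHAP, a subset is scored by how similar the model's response to the ablated prompt is to its response to the full prompt).

```python
"""Monte Carlo Shapley estimation of token importance (illustrative sketch)."""
import random
from typing import Callable, Dict, List, Sequence


def shapley_estimates(
    tokens: Sequence[str],
    value_fn: Callable[[Sequence[str]], float],
    num_samples: int = 200,
    seed: int = 0,
) -> Dict[str, float]:
    """Estimate each token's Shapley value by averaging its marginal
    contribution over `num_samples` random orderings of the tokens."""
    rng = random.Random(seed)
    n = len(tokens)
    totals = [0.0] * n
    for _ in range(num_samples):
        order = list(range(n))
        rng.shuffle(order)  # one random permutation of token positions
        included: List[int] = []
        prev_value = value_fn([])  # value of the empty subset
        for idx in order:
            included.append(idx)
            # Rebuild the ablated prompt, preserving original token order.
            subset = [tokens[i] for i in sorted(included)]
            cur_value = value_fn(subset)
            totals[idx] += cur_value - prev_value  # marginal contribution
            prev_value = cur_value
    return {tokens[i]: totals[i] / num_samples for i in range(n)}


if __name__ == "__main__":
    # Hypothetical toy value function: reward subsets that retain key
    # content words. In practice this would query an LLM and compare
    # its response to the full-prompt response.
    prompt = "Why is the sky blue".split()
    keywords = {"sky", "blue"}

    def value(subset: Sequence[str]) -> float:
        return len(keywords & set(subset)) / len(keywords)

    for tok, score in shapley_estimates(prompt, value).items():
        print(f"{tok:>5s}: {score:+.3f}")
```

Under this toy value function the estimates converge to 0.5 for "sky" and "blue" and 0.0 for the filler words, matching the exact Shapley values; the Monte Carlo loop trades that exactness for tractability when the value function is an expensive LLM call.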
