
Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention

Zhen Tan, Tianlong Chen, Zhenyu Zhang, Huan Liu. arXiv 2023

Tags: Applications, Attention Mechanism, Interpretability and Explainability, Model Architecture, Reinforcement Learning, Tools

Large Language Models (LLMs) have achieved unprecedented breakthroughs in various natural language processing domains. However, the enigmatic "black-box" nature of LLMs remains a significant challenge for interpretability, hampering transparent and accountable applications. While past approaches, such as attention visualization, pivotal subnetwork extraction, and concept-based analyses, offer some insight, they often focus on either local or global explanations within a single dimension, occasionally falling short in providing comprehensive clarity. In response, we propose a novel methodology anchored in sparsity-guided techniques, aiming to provide a holistic interpretation of LLMs. Our framework, termed SparseCBM, innovatively integrates sparsity to elucidate three intertwined layers of interpretation: input, subnetwork, and concept levels. In addition, the newly introduced dimension of interpretable inference-time intervention facilitates dynamic adjustments to the model during deployment. Through rigorous empirical evaluations on real-world datasets, we demonstrate that SparseCBM delivers a profound understanding of LLM behaviors, setting it apart in both interpreting and ameliorating model inaccuracies. Code is provided in the supplementary material.
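The abstract names two mechanisms: a concept bottleneck between the LLM backbone and the task label, and an inference-time intervention that lets a user overwrite concept activations. The authors' code lives in their supplementary material and is not reproduced here; the PyTorch sketch below is a minimal, hypothetical illustration of those two ideas only. All names (`ConceptBottleneck`, `magnitude_mask`, the shapes and the magnitude-pruning choice) are assumptions for illustration, not the paper's API or its actual sparsification method.

```python
import torch
import torch.nn as nn


class ConceptBottleneck(nn.Module):
    """Hypothetical concept-bottleneck head over a pooled LLM representation.

    Concept activations form an interpretable bottleneck between the backbone
    and the task label, so a user can inspect and overwrite them at inference.
    """

    def __init__(self, hidden_dim: int, num_concepts: int, num_labels: int):
        super().__init__()
        self.concept_head = nn.Linear(hidden_dim, num_concepts)  # concept level
        self.label_head = nn.Linear(num_concepts, num_labels)    # task prediction

    @torch.no_grad()
    def forward(self, pooled: torch.Tensor, intervention: dict | None = None):
        concepts = torch.sigmoid(self.concept_head(pooled))
        if intervention:
            # Inference-time intervention: replace selected concept
            # activations with user-corrected values, then re-predict.
            for idx, value in intervention.items():
                concepts[:, idx] = value
        return concepts, self.label_head(concepts)


def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Stand-in for the sparsity-guided part: a magnitude-pruning mask that
    carves a subnetwork out of a backbone weight matrix (assumed technique)."""
    k = max(int(weight.numel() * sparsity), 1)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()


# Usage: force concept 2 "on" at inference and observe the revised prediction.
pooled = torch.randn(2, 768)  # dummy pooled hidden states from an LLM encoder
cbm = ConceptBottleneck(hidden_dim=768, num_concepts=8, num_labels=3)
concepts, logits = cbm(pooled, intervention={2: 1.0})
```

In this reading, the sparsity masks expose subnetwork-level structure while the bottleneck exposes concept-level structure, and the intervention dictionary is the deployment-time correction knob the abstract describes.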
