
An Investigation Of Language Model Interpretability Via Sentence Editing

Samuel Stevens, Yu Su. arXiv 2020

[Paper] [Code]    
Tags: Attention Mechanism, BERT, Has Code, Interpretability And Explainability, Model Architecture, Training Techniques

Pre-trained language models (PLMs) like BERT are being used for almost all language-related tasks, but interpreting their behavior remains a significant challenge, and many important questions remain largely unanswered. In this work, we re-purpose a sentence editing dataset, where faithful, high-quality human rationales can be automatically extracted and compared with extracted model rationales, as a new testbed for interpretability. This enables us to conduct a systematic investigation of an array of questions regarding PLMs' interpretability, including the role of the pre-training procedure, a comparison of rationale extraction methods, and the behavior of different layers in the PLM. The investigation generates new insights; for example, contrary to the common understanding, we find that attention weights correlate well with human rationales and work better than gradient-based saliency in extracting model rationales. Both the dataset and code are available at https://github.com/samuelstevens/sentence-editing-interpretability to facilitate future interpretability research.
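The two rationale extraction methods the abstract contrasts can be illustrated with a short sketch. The code below is not the authors' implementation (that lives in the linked repository); it is a minimal, hedged example of the general techniques, assuming a Hugging Face `bert-base-uncased` checkpoint and the standard `transformers` API. It scores each input token two ways: by the attention it receives from the `[CLS]` token (averaged over heads in one layer), and by vanilla gradient saliency (the gradient norm of the top logit with respect to the token's input embedding).

```python
# Illustrative sketch, not the paper's code: two ways to score tokens as
# "model rationales" for a BERT classifier.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumed checkpoint; the head is untrained,
                                  # so scores here are for illustration only
tokenizer = BertTokenizerFast.from_pretrained(MODEL_NAME)
model = BertForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def attention_rationale(sentence: str, layer: int = -1):
    """Score each token by the attention it receives from [CLS],
    averaged over all heads in the chosen layer."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_attentions=True)
    # attentions[layer] has shape (batch, heads, seq_len, seq_len)
    attn = outputs.attentions[layer][0].mean(dim=0)  # average over heads
    cls_to_tokens = attn[0]  # row 0 = attention paid by the [CLS] token
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return list(zip(tokens, cls_to_tokens.tolist()))

def gradient_saliency_rationale(sentence: str):
    """Score each token by the L2 norm of the gradient of the top logit
    with respect to that token's input embedding (vanilla saliency)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    embeds = model.bert.embeddings.word_embeddings(inputs["input_ids"])
    embeds.retain_grad()  # keep gradients on this non-leaf tensor
    out = model(inputs_embeds=embeds, attention_mask=inputs["attention_mask"])
    pred = out.logits[0].argmax()
    out.logits[0, pred].backward()
    scores = embeds.grad[0].norm(dim=-1)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return list(zip(tokens, scores.tolist()))

if __name__ == "__main__":
    sentence = "The quick brown fox jumps over the lazy dog."
    print(attention_rationale(sentence))
    print(gradient_saliency_rationale(sentence))
```

With human rationales extracted from sentence edits as the reference, per-token scores like these can be ranked and compared against the human-marked spans, which is how the paper can conclude that attention-based rationales align with human rationales better than gradient-based saliency does.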

Similar Work