The Importance Of Prompt Tuning For Automated Neuron Explanations

Lee Justin, Oikarinen Tuomas, Chatha Arjun, Chang Keng-chi, Chen Yilan, Weng Tsui-wei. arXiv 2023

Tags: GPT, Interpretability and Explainability, Model Architecture, Prompting, Responsible AI, Uncategorized

Recent advances have greatly increased the capabilities of large language models (LLMs), but our understanding of the models and their safety has not progressed as fast. In this paper we aim to understand LLMs more deeply by studying their individual neurons. We build upon previous work showing that large language models such as GPT-4 can be useful in explaining what each neuron in a language model does. Specifically, we analyze the effect of the prompt used to generate explanations and show that reformatting the explanation prompt in a more natural way can significantly improve neuron explanation quality and greatly reduce computational cost. We demonstrate the effects of our new prompts in three different ways, incorporating both automated and human evaluations.
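To make the pipeline concrete, the sketch below shows one way an explainer model can be prompted to describe a neuron from its top-activating tokens. It is a minimal, hypothetical illustration, not the paper's actual prompt format: the activation record, the prompt wording, and the `build_prompt` helper are assumptions for demonstration, and it assumes the OpenAI Python client with an API key available in the environment.

```python
# Hypothetical sketch: asking an LLM to explain a single neuron from its
# top-activating tokens. The prompt wording here is illustrative only; the
# paper compares different prompt formats and finds that more natural
# phrasing can improve explanation quality while using fewer tokens.
from openai import OpenAI  # assumes the `openai` Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Example activation record for one neuron: tokens paired with activation
# strengths on a 0-10 scale. These values are made up for illustration.
activation_record = [
    ("dog", 9), ("puppy", 8), ("cat", 7), ("kennel", 6), ("walked", 2),
]

def build_prompt(record):
    """Format the neuron's top-activating tokens as a natural-language question."""
    token_list = ", ".join(f"{tok} ({act})" for tok, act in record)
    return (
        "The following tokens (with activation strengths from 0 to 10) most "
        f"strongly activate a single neuron in a language model: {token_list}. "
        "In one short phrase, what concept does this neuron respond to?"
    )

response = client.chat.completions.create(
    model="gpt-4",  # explainer model; the paper uses GPT-4-style explainers
    messages=[{"role": "user", "content": build_prompt(activation_record)}],
)
print(response.choices[0].message.content)  # e.g. a phrase like "domestic animals"
```

In this kind of setup, the choice of how the activation record is phrased inside the prompt is exactly the variable the paper studies: a more natural formatting of the same information can yield better explanations at lower token cost.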

Similar Work