Unibias: Unveiling And Mitigating LLM Bias Through Internal Attention And FFN Manipulation

Zhou Hanzhang, Feng Zijian, Zhu Zixiao, Qian Junlang, Mao Kezhi. arXiv 2024

[Paper]
Attention Mechanism · Ethics And Bias · In-Context Learning · Model Architecture · Prompting · Reinforcement Learning

Large language models (LLMs) have demonstrated impressive capabilities across a variety of tasks under the in-context learning (ICL) paradigm. However, their effectiveness is often compromised by inherent bias, leading to prompt brittleness, i.e., sensitivity to design settings such as example selection, example order, and prompt formatting. Previous studies have addressed LLM bias through external adjustment of model outputs, but the internal mechanisms that give rise to such bias remain unexplored. This work delves into those mechanisms, investigating in particular how feedforward neural networks (FFNs) and attention heads produce bias in LLMs. By interpreting the contributions of individual FFN vectors and attention heads, the authors identify the biased LLM components that skew predictions toward specific labels. To mitigate these biases, they introduce UniBias, an inference-only method that identifies and eliminates biased FFN vectors and attention heads. Extensive experiments across 12 NLP datasets demonstrate that UniBias significantly enhances ICL performance and alleviates the prompt brittleness of LLMs.
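The core idea the abstract describes — scoring internal components for label skew and zeroing out the biased ones at inference — can be illustrated with a toy sketch. This is not the paper's exact procedure; all names (`W_U`, `ffn_values`, the skew threshold) are invented for illustration, and the "logit-lens"-style projection onto label tokens is one plausible way to read a component's label preference.

```python
# Toy sketch (assumptions labeled): project each FFN value vector through
# an unembedding matrix restricted to the label tokens; vectors whose
# projection is strongly skewed toward one label are flagged as "biased"
# and zeroed (eliminated) at inference time.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_ffn, n_labels = 8, 16, 2          # toy dimensions, not real model sizes
W_U = rng.normal(size=(d_model, n_labels))   # hypothetical unembedding over label tokens
ffn_values = rng.normal(size=(n_ffn, d_model))

def label_logits(vectors, W_U):
    """Project internal vectors onto label-token logits."""
    return vectors @ W_U                     # shape (n_ffn, n_labels)

def biased_mask(vectors, W_U, threshold=1.5):
    """Flag vectors whose label logits are heavily imbalanced (an assumed criterion)."""
    logits = label_logits(vectors, W_U)
    skew = np.abs(logits[:, 0] - logits[:, 1])
    return skew > threshold                  # True = treat component as biased

mask = biased_mask(ffn_values, W_U)
debiased = ffn_values.copy()
debiased[mask] = 0.0                         # eliminate biased FFN vectors

print(int(mask.sum()), "of", n_ffn, "FFN vectors masked")
```

The same masking logic extends to attention heads by zeroing a head's output projection instead of an FFN value vector; the paper's actual identification criterion is more involved than this single skew threshold.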

Similar Work