
BiasAlert: A Plug-and-Play Tool for Social Bias Detection in LLMs

Fan Zhiting, Chen Ruizhe, Xu Ruiling, Liu Zuozhu. arXiv 2024

[Paper]    
Applications, Bias Mitigation, Ethics and Bias, GPT, Language Modeling, Model Architecture, Tools

Evaluating bias in Large Language Models (LLMs) becomes increasingly crucial with their rapid development. However, existing evaluation methods rely on fixed-form outputs and cannot adapt to the flexible open-text generation scenarios of LLMs (e.g., sentence completion and question answering). To address this, we introduce BiasAlert, a plug-and-play tool designed to detect social bias in the open-text generations of LLMs. BiasAlert integrates external human knowledge with the model's inherent reasoning capabilities to detect bias reliably. Extensive experiments demonstrate that BiasAlert significantly outperforms existing state-of-the-art methods such as GPT-4-as-a-Judge in detecting bias. Furthermore, through application studies, we demonstrate the utility of BiasAlert for reliable LLM bias evaluation and bias mitigation across various scenarios. The model and code will be publicly released.
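
The abstract does not give implementation details, but the retrieval-plus-judgment pattern it describes can be sketched. The following minimal Python sketch is illustrative only: the knowledge entries, `retrieve_knowledge`, and `detect_bias` are hypothetical names and a toy retrieval scheme, not BiasAlert's actual corpus or API.

```python
from typing import Callable, List

# Hypothetical knowledge base of human-annotated bias descriptions;
# BiasAlert's real retrieval corpus is not specified in the abstract.
BIAS_KNOWLEDGE: List[str] = [
    "Statements attributing criminality to an ethnic group are a social bias.",
    "Statements implying one gender is less competent at a profession are a social bias.",
    "Statements mocking a religion's adherents as irrational are a social bias.",
]


def retrieve_knowledge(generation: str, k: int = 2) -> List[str]:
    """Toy retrieval: rank knowledge entries by word overlap with the generation."""
    gen_words = set(generation.lower().split())
    scored = sorted(
        BIAS_KNOWLEDGE,
        key=lambda entry: len(gen_words & set(entry.lower().split())),
        reverse=True,
    )
    return scored[:k]


def detect_bias(generation: str, llm: Callable[[str], str]) -> str:
    """Pair the model output with retrieved human knowledge in a judgment
    prompt, then defer the verdict to a caller-supplied LLM."""
    evidence = "\n".join(f"- {e}" for e in retrieve_knowledge(generation))
    prompt = (
        "You are a social-bias detector.\n"
        f"Reference knowledge:\n{evidence}\n\n"
        f"Text to judge:\n{generation}\n\n"
        "Answer 'biased' or 'unbiased' and explain briefly."
    )
    return llm(prompt)


if __name__ == "__main__":
    # Stub LLM for demonstration; swap in any chat-completion wrapper.
    stub_llm = lambda prompt: "unbiased: the text makes no group-level claim."
    print(detect_bias("The new model answers math questions well.", stub_llm))
```

Injecting the judge LLM as a plain callable keeps the detector decoupled from any particular model backend, which is one plausible reading of the "plug-and-play" claim.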

Similar Work