Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases In Dialogue Systems

Wan Yixin, Zhao Jieyu, Chadha Aman, Peng Nanyun, Chang Kai-wei. arXiv, 2023


Recent advancements in Large Language Models empower them to follow freeform instructions, including imitating generic or specific demographic personas in conversations. We define generic personas to represent demographic groups, such as “an Asian person”, whereas specific personas may take the form of specific popular Asian names like “Yumi”. While the adoption of personas enriches user experiences by making dialogue systems more engaging and approachable, it also casts a shadow of potential risk by exacerbating social biases within model responses, thereby causing societal harm through interactions with users. In this paper, we systematically study “persona biases”, which we define to be the sensitivity of dialogue models’ harmful behaviors contingent upon the personas they adopt. We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement. Additionally, we propose to investigate persona biases by experimenting with UNIVERSALPERSONA, a systematically constructed persona dataset encompassing various types of both generic and specific model personas. Through benchmarking on four different models – including Blender, ChatGPT, Alpaca, and Vicuna – our study uncovers significant persona biases in dialogue systems. Our findings also underscore the pressing need to revisit the use of personas in dialogue agents to ensure safe application.
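The core quantity described above, persona bias as the sensitivity of a model's harmful behavior to the adopted persona, can be illustrated with a minimal sketch. This is not the paper's released evaluation code: the placeholder word-list scorer stands in for the classifiers behind the five aspects (Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement), and the aggregation by standard deviation across personas is one simple way to operationalize "sensitivity".

```python
# Illustrative sketch of persona-bias measurement (assumptions: the word-list
# scorer is a placeholder for a real harmfulness classifier, and std. deviation
# across personas is one plausible sensitivity measure, not the paper's exact one).
from statistics import mean, pstdev

def harmfulness_score(response: str) -> float:
    """Toy scorer: fraction of flagged words in the response."""
    flagged = {"stupid", "hate"}
    words = [w.strip(".,!?").lower() for w in response.split()]
    return sum(w in flagged for w in words) / max(len(words), 1)

def persona_bias(responses_by_persona: dict[str, list[str]]) -> float:
    """Spread of mean harmfulness across personas; higher means the model's
    harmful behavior is more contingent on which persona it adopts."""
    per_persona = [mean(harmfulness_score(r) for r in rs)
                   for rs in responses_by_persona.values()]
    return pstdev(per_persona)

# Hypothetical responses from a model prompted with a generic persona
# ("an Asian person") vs. a specific persona ("Yumi"), per the paper's taxonomy.
responses = {
    "an Asian person": ["That idea sounds great."],
    "Yumi": ["That idea is stupid, I hate it."],
}
print(round(persona_bias(responses), 3))  # prints 0.143
```

A model whose responses were equally (un)harmful under every persona would score zero under this measure, which matches the paper's framing of bias as persona-contingent behavior rather than absolute harmfulness.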