Cognitive Bias In High-Stakes Decision-Making With LLMs

Echterhoff Jessica, Liu Yao, Alessa Abeer, McAuley Julian, He Zexue. arXiv 2024

[Paper]    
Bias Mitigation, Ethics And Bias, Prompting, Reinforcement Learning, Tools, Training Techniques

Large language models (LLMs) offer significant potential as tools to support an expanding range of decision-making tasks. Because they are trained on human-created data, LLMs have been shown to inherit societal biases against protected groups, as well as to exhibit bias functionally resembling human cognitive bias. Such human-like bias can impede fair and explainable decisions made with LLM assistance. Our work introduces BiasBuster, a framework designed to uncover, evaluate, and mitigate cognitive bias in LLMs, particularly in high-stakes decision-making tasks. Inspired by prior research in psychology and cognitive science, we develop a dataset containing 16,800 prompts to evaluate different cognitive biases (e.g., prompt-induced, sequential, inherent). We test various bias mitigation strategies and propose a novel method that uses LLMs to debias their own prompts. Our analysis provides a comprehensive picture of the presence and effects of cognitive bias across commercial and open-source models. We demonstrate that our self-help debiasing effectively mitigates model answers that display patterns akin to human cognitive bias, without requiring manually crafted examples for each bias.
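As a rough illustration of the self-help debiasing idea described in the abstract, the sketch below has the model first rewrite a potentially biased prompt into a neutral one, then answer the rewritten prompt. This is a minimal sketch under stated assumptions: the OpenAI-style chat client, the model name, the rewrite instruction, and the example prompt are all illustrative placeholders, not the paper's exact prompts or setup.

```python
# Minimal sketch of self-help debiasing: the model rewrites its own prompt
# to strip biasing language, then answers the debiased version.
# NOTE: client, model name, and prompt wording are assumptions for
# illustration; they are not taken from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # placeholder model name


def _chat(prompt: str) -> str:
    """Send a single-turn prompt and return the reply text."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content


def self_debias_answer(task_prompt: str) -> str:
    """Ask the model to remove biasing language from the prompt,
    then answer the rewritten, debiased prompt."""
    rewrite_instruction = (
        "Rewrite the following question so that it contains no wording "
        "that could anchor, frame, or otherwise bias the answer. "
        "Preserve all information needed to answer it.\n\n"
        f"Question: {task_prompt}\n\nRewritten question:"
    )
    debiased_prompt = _chat(rewrite_instruction)
    return _chat(debiased_prompt)


if __name__ == "__main__":
    # Hypothetical anchoring-style prompt for demonstration only.
    biased = (
        "Most experts already agree the first option is best. "
        "Which of the two treatment plans should the patient choose?"
    )
    print(self_debias_answer(biased))
```

The appeal of this two-step pattern, as the abstract notes, is that it needs no manually crafted debiasing examples per bias type: the same generic rewrite instruction is applied regardless of whether the original prompt induces anchoring, framing, or another bias.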

Similar Work