Balancing Rigor And Utility: Mitigating Cognitive Biases In Large Language Models For Multiple-choice Questions

Wang Liman, Zhong Hanyang, Cao Wenting, Sun Zeyuan. arXiv 2024

[Paper]    
Applications · Efficiency And Optimization · Ethics And Bias · RAG · Reinforcement Learning

This paper examines the role of cognitive biases in the decision-making processes of large language models (LLMs), challenging the conventional goal of eliminating all biases. We show that certain cognitive biases, when properly balanced, can enhance decision-making efficiency through rational deviations and heuristic shortcuts. By introducing heuristic moderation and an abstention option, which allows LLMs to withhold responses when uncertain, we reduce error rates, improve decision accuracy, and optimize decision rates. Using the Balance Rigor and Utility (BRU) dataset, developed through expert collaboration, our findings demonstrate that targeted inspection of cognitive biases aligns LLM decisions more closely with human reasoning, enhancing reliability and suggesting strategies for future improvements. This approach offers a novel way to leverage cognitive biases to improve the practical utility of LLMs across various applications.
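The abstention idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: the confidence threshold, softmax scoring, and function names here are assumptions chosen for clarity. The model answers a multiple-choice question only when its confidence in the top option clears a threshold; otherwise it withholds a response, and the evaluation separates the decision rate (fraction answered) from the error rate among answered questions.

```python
import math

def abstain_answer(option_scores, threshold=0.6):
    """Pick the highest-scoring MCQ option, or abstain (return None)
    when confidence falls below the threshold. Illustrative sketch."""
    # Softmax over raw option scores to get a confidence distribution.
    exps = [math.exp(s) for s in option_scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return None  # withhold the response: uncertainty is too high
    return best

def evaluate(predictions, labels):
    """Decision rate = fraction answered; error rate = errors among answered."""
    answered = [(p, y) for p, y in zip(predictions, labels) if p is not None]
    decision_rate = len(answered) / len(predictions)
    errors = sum(1 for p, y in answered if p != y)
    error_rate = errors / len(answered) if answered else 0.0
    return decision_rate, error_rate
```

Raising the threshold trades a lower decision rate for a lower error rate on the questions the model does answer, which is the rigor/utility balance the title refers to.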

Similar Work