
Investigating Context Effects In Similarity Judgements In Large Language Models

Uprety Sagar, Jaiswal Amit Kumar, Liu Haiming, Song Dawei. arXiv 2024

[Paper]    
Tags: Agentic, Applications, Ethics And Bias, Reinforcement Learning

Large Language Models (LLMs) have revolutionised the capability of AI models to comprehend and generate natural language text. They are increasingly being used to build and deploy agents in real-world scenarios, where they make decisions and take actions based on their understanding of the context. Researchers, policy makers and enterprises alike are therefore working to ensure that the decisions made by these agents align with human values and user expectations. However, human values and decisions are not always straightforward to measure, and they are subject to various cognitive biases; a vast body of literature in behavioural science studies such biases in human judgements. In this work we report an ongoing investigation into the alignment of LLMs with human judgements affected by order bias. Specifically, we focus on a well-known human study that demonstrated order effects in similarity judgements, and replicate it with several popular LLMs. We report the settings in which LLMs exhibit human-like order-effect bias and discuss the implications of these findings for the design and development of LLM-based applications.
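To illustrate the kind of probe such a replication involves, the sketch below asks a chat model to rate the similarity of a pair in both presentation orders and compares the two ratings. This is a minimal sketch, not the paper's protocol: the prompt wording, rating scale, model name, and example pair (country pairs of the sort used in classic human similarity studies) are all illustrative assumptions.

```python
# Hedged sketch of an order-effect probe: rate similarity of (A, B) in both
# presentation orders and inspect the gap. Prompt wording, scale, and model
# are assumptions, not the paper's exact setup. Requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

def rate_similarity(a: str, b: str, model: str = "gpt-4o-mini") -> float:
    """Ask the model how similar `a` is to `b` on a 0-20 scale (assumed scale)."""
    prompt = (
        f"On a scale of 0 (not similar at all) to 20 (identical), "
        f"how similar is {a} to {b}? Reply with a single number."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # A sketch-level parse; a real experiment would validate the reply format.
    return float(response.choices[0].message.content.strip())

# An order effect shows up as a systematic gap between the two directions.
forward = rate_similarity("North Korea", "China")
backward = rate_similarity("China", "North Korea")
print(f"sim(A,B) = {forward}, sim(B,A) = {backward}, gap = {forward - backward}")
```

In a full replication, this comparison would be repeated over many pairs and multiple models, with the gap averaged to separate systematic order bias from sampling noise.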

Similar Work