
Behavioral Testing: Can Large Language Models Implicitly Resolve Ambiguous Entities?

Sedova Anastasiia, Litschko Robert, Frassinelli Diego, Roth Benjamin, Plank Barbara. Arxiv 2024

[Paper]    
Ethics And Bias · Prompting · Reinforcement Learning · Training Techniques

One of the major factors behind the striking performance of large language models (LLMs) is the vast amount of factual knowledge accumulated during pre-training. Yet many LLMs suffer from self-inconsistency, which raises doubts about their trustworthiness and reliability. In this paper, we focus on entity type ambiguity and analyze how proficiently and consistently state-of-the-art LLMs apply their factual knowledge when prompted for entities under ambiguity. To do so, we propose an evaluation protocol that disentangles knowing from applying knowledge, and test state-of-the-art LLMs on 49 entities. Our experiments reveal that LLMs perform poorly with ambiguous prompts, achieving only 80% accuracy. Our results further demonstrate systematic discrepancies in LLM behavior and their failure to consistently apply information: the models can exhibit knowledge without being able to utilize it, show significant biases toward preferred readings, and display self-inconsistencies. Our study highlights the importance of handling entity ambiguity in the future for more trustworthy LLMs.
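The paper itself specifies the full evaluation protocol; as a rough, hypothetical illustration of the underlying idea (separating whether a model *knows* a fact about an ambiguous entity from whether it *applies* that knowledge when the entity type is left implicit), one might contrast an explicit and an implicit prompt for the same entity and compare the answers. The `query_llm` helper, the canned responses, and the example entity below are placeholders, not code or data from the paper.

```python
# Hypothetical sketch (not the authors' code): contrast an unambiguous
# "knowing" prompt with an ambiguous "applying" prompt for the same entity
# and check whether the answers agree. Swap `query_llm` for a real
# chat-completion call; here it is a mock so the script runs end to end.

def query_llm(prompt: str) -> str:
    """Mock LLM call; substitute an actual API client in practice."""
    canned = {
        "What year was the company Apple founded?": "1976",
        "What year was Apple founded?": "Apples have been cultivated for millennia.",
    }
    return canned.get(prompt, "unknown")

entity = "Apple"  # ambiguous reading: fruit vs. company

# Explicit prompt: the entity type is stated, probing whether the model *knows* the fact.
knowing = query_llm(f"What year was the company {entity} founded?")

# Implicit prompt: the type must be inferred, probing whether the model *applies* the fact.
applying = query_llm(f"What year was {entity} founded?")

print(f"knowing:  {knowing}")
print(f"applying: {applying}")
print(f"consistent: {knowing == applying}")
```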

Similar Work