ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation For Generative Large Language Models

Elangovan Aparna, Liu Ling, Xu Lei, Bodapati Sravan, Roth Dan. arXiv 2024

[Paper]    
Ethics And Bias Tools

In this position paper, we argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking that draws upon insights from disciplines such as user experience research and human behavioral psychology to ensure that the experimental design and results are reliable. Conclusions drawn from these evaluations must therefore account for factors such as usability, aesthetics, and cognitive biases. We highlight how cognitive biases can conflate the fluency of information with its truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert scales. Furthermore, the evaluation should differentiate the capabilities and weaknesses of increasingly powerful large language models, which in turn requires effective test sets. The scalability of human evaluation is also crucial to wider adoption. Hence, to design an effective human evaluation system in the age of generative NLP, we propose the ConSiDERS-The-Human evaluation framework, consisting of 6 pillars: Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.

Similar Work