
Reference-Guided Verdict: LLMs-as-Judges in Automatic Evaluation of Free-Form Text

Sher Badshah, Hassan Sajjad. arXiv 2024

[Paper]    
Tags: RAG, Reinforcement Learning, Uncategorized

The emergence of Large Language Models (LLMs) as chat assistants capable of generating human-like conversations has amplified the need for robust evaluation methods, particularly for open-ended tasks. Conventional metrics like BLEU and ROUGE, while useful, are increasingly inadequate for capturing the subtle semantics and contextual richness of such generative outputs. We propose a reference-guided verdict method that automates the evaluation process by leveraging multiple LLMs-as-judges. Through experiments on three open-ended question-answering tasks, we demonstrate that combining multiple LLMs-as-judges significantly improves the reliability and accuracy of evaluations, particularly in complex tasks where a single model might struggle. Our findings reveal a strong correlation with human evaluations, establishing our method as a viable and effective alternative to traditional metrics and human judgments, particularly in the context of LLM-based chat assistants where the complexity and diversity of responses challenge existing benchmarks.
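The abstract describes the core mechanism: each judge LLM receives the question, a reference answer, and the candidate response, and its verdicts are combined across judges. The sketch below is a minimal illustration of that setup, not the authors' exact protocol; the prompt template, the one-word verdict format, the `Judge` callable interface, and majority-vote aggregation are all assumptions made here for clarity.

```python
from collections import Counter
from typing import Callable, List

# Hypothetical interface: a judge is any callable that takes a prompt string
# and returns a verdict string (e.g., "correct" or "incorrect"). In practice
# this would wrap an API call to an LLM.
Judge = Callable[[str], str]

# Illustrative prompt; the paper's actual prompt wording may differ.
PROMPT_TEMPLATE = (
    "Question: {question}\n"
    "Reference answer: {reference}\n"
    "Candidate answer: {candidate}\n"
    "Given the reference answer, is the candidate answer correct? "
    "Reply with exactly one word: correct or incorrect."
)

def reference_guided_verdict(
    question: str,
    reference: str,
    candidate: str,
    judges: List[Judge],
) -> str:
    """Query each LLM judge with the reference-guided prompt and
    return the majority-vote verdict across all judges."""
    prompt = PROMPT_TEMPLATE.format(
        question=question, reference=reference, candidate=candidate
    )
    verdicts = [judge(prompt).strip().lower() for judge in judges]
    # Majority vote; ties resolve to the verdict seen first.
    return Counter(verdicts).most_common(1)[0][0]

if __name__ == "__main__":
    # Stub judges standing in for real LLM calls.
    judges: List[Judge] = [
        lambda p: "correct",
        lambda p: "correct",
        lambda p: "incorrect",
    ]
    print(reference_guided_verdict(
        "What is the capital of France?",
        "Paris",
        "The capital of France is Paris.",
        judges,
    ))  # -> "correct"
```

With an odd number of judges, simple majority voting always yields a single verdict; this mirrors the paper's finding that aggregating several judges is more reliable than relying on any single model, though the authors may weight or combine verdicts differently.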
