Prepair: Pointwise Reasoning Enhance Pairwise Evaluating For Robust Instruction-following Assessments

Jeong Hawon, Park Chaehun, Hong Jimin, Choo Jaegul. arXiv 2024

[Paper]    
Tags: Ethics And Bias, Security

Pairwise evaluation using large language models (LLMs) is widely used for assessing natural language generation (NLG) tasks. However, the reliability of LLM evaluators is often compromised by biases, such as a preference for verbosity and authoritative tone. In this study, we compare two LLM-based evaluation approaches: pointwise and pairwise. Our findings show that pointwise evaluators are more robust to these undesirable preferences. Further analysis reveals that pairwise evaluators can accurately identify the shortcomings of low-quality outputs even when their final judgment is incorrect. These results indicate that LLMs are more severely influenced by their biases in a pairwise evaluation setup. To mitigate this, we propose a hybrid method that integrates pointwise reasoning into pairwise evaluation. Experimental results show that our method enhances the robustness of pairwise evaluators against adversarial samples while preserving accuracy on normal samples.
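The sketch below illustrates the hybrid idea as described in the abstract: first elicit pointwise reasoning about each candidate independently, then condition a single pairwise judgment on both analyses. The `call_llm` helper, prompt wording, and output format are hypothetical placeholders, not the authors' actual prompts or implementation.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call (assumption)."""
    raise NotImplementedError


def pointwise_reasoning(instruction: str, response: str) -> str:
    # Analyze one response in isolation, so the reasoning is not
    # swayed by the other candidate's verbosity or tone.
    prompt = (
        "Evaluate how well the response follows the instruction.\n"
        f"Instruction: {instruction}\n"
        f"Response: {response}\n"
        "List its strengths and weaknesses."
    )
    return call_llm(prompt)


def hybrid_pairwise_judgment(instruction: str, response_a: str, response_b: str) -> str:
    # Step 1: independent pointwise analyses of each candidate.
    analysis_a = pointwise_reasoning(instruction, response_a)
    analysis_b = pointwise_reasoning(instruction, response_b)

    # Step 2: pairwise decision conditioned on both analyses.
    prompt = (
        f"Instruction: {instruction}\n"
        f"Response A: {response_a}\nAnalysis of A: {analysis_a}\n"
        f"Response B: {response_b}\nAnalysis of B: {analysis_b}\n"
        "Based on the analyses above, which response follows the "
        "instruction better? Answer with 'A' or 'B'."
    )
    return call_llm(prompt)
```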

Similar Work