
Grade Score: Quantifying LLM Performance In Option Selection

Iourovitski Dmitri. arXiv 2024

[Paper] [Code]    
Applications, Bias Mitigation, Ethics And Bias, Fairness, Has Code, Prompting, RAG

This study introduces the “Grade Score”, a novel metric for evaluating the consistency and fairness of Large Language Models (LLMs) used as multiple-choice judges, focusing on order bias and choice consistency. The Grade Score combines Entropy, which measures order bias, with Mode Frequency, which assesses choice stability, offering insight into an LLM’s reliability and impartiality. The study explores techniques such as prompt engineering and option sampling strategies to optimize the Grade Score, demonstrating their effectiveness in improving LLM performance. Results show that performance varies across LLMs and prompts, and highlight the positive impact of including irrelevant options. The study also identifies an emergent behavior in instruction-following models: they adapt to instructions that target specific biases. The Grade Score facilitates comparisons between LLMs and encourages ongoing research into optimizing their decision-making processes, with potential implications for improving their reliability and fairness in various applications. All code is available on GitHub: https://github.com/IoDmitri/GradeLab
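As a rough illustration of the two components described above, the sketch below computes a normalized position entropy (order bias) and a mode frequency (choice stability) from a set of judge trials with shuffled option orderings, then averages them. The function names, the per-trial `positions`/`choices` bookkeeping, and the plain average are illustrative assumptions; the paper and the GradeLab repository define the exact aggregation.

```python
import math
from collections import Counter

def normalized_entropy(positions, n_options):
    """Shannon entropy of the selected-position distribution, normalized to [0, 1].
    With randomly shuffled options, high values suggest little order bias."""
    counts = Counter(positions)
    total = len(positions)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    max_entropy = math.log2(n_options)
    return entropy / max_entropy if max_entropy > 0 else 0.0

def mode_frequency(choices):
    """Fraction of trials in which the most frequently selected option
    (by identity) was chosen. High values suggest stable choices."""
    counts = Counter(choices)
    return counts.most_common(1)[0][1] / len(choices)

def grade_score(positions, choices, n_options):
    """Combine the two components with a simple average (assumed here
    for illustration; the paper specifies the actual combination)."""
    return 0.5 * (normalized_entropy(positions, n_options) + mode_frequency(choices))

# Example: 8 trials over 4 shuffled options. `positions` records which slot
# (0-3) the judge picked in each trial; `choices` records which option it was.
positions = [0, 2, 1, 3, 0, 2, 1, 3]
choices   = ["B", "B", "B", "A", "B", "B", "B", "B"]
print(round(grade_score(positions, choices, n_options=4), 3))  # 0.938
```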

Similar Work