ANGO: A Next-level Evaluation Benchmark For Generation-oriented Language Models In Chinese Domain

Wang Bingchao. Arxiv 2024

[Paper]    
Interpretability And Explainability RAG Reinforcement Learning Tools Training Techniques Uncategorized

Recently, various Large Language Model (LLM) evaluation datasets have emerged, but most of them suffer from distorted rankings and make it difficult to analyze model capabilities. Addressing these concerns, this paper introduces ANGO, a Chinese multiple-choice question evaluation benchmark. ANGO proposes a Keypoint categorization standard for the first time: each question in ANGO can correspond to multiple keypoints, effectively enhancing the interpretability of evaluation results. Based on the performance of real humans, we build a quantifiable question difficulty standard and divide ANGO questions into 9 difficulty levels, which provides more precise guidance for model training. To minimize the impact of data leakage and fully leverage ANGO's innovative features, we have engineered exclusive sampling strategies and a new evaluation framework that supports swift test-set iteration. Our experiments demonstrate that ANGO poses a stronger challenge to models and reveals more detail in evaluation results compared to existing benchmarks.
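The abstract does not specify how human performance is mapped to the 9 difficulty levels. The following is a minimal illustrative sketch, assuming evenly spaced accuracy bins where lower human accuracy yields a higher (harder) level; the function name, thresholds, and example values are hypothetical, not the paper's actual method.

```python
# Minimal sketch (not the paper's implementation): bin a question into one of
# 9 difficulty levels from real-human accuracy, with level 9 = hardest.
# Bin edges and example values below are illustrative assumptions.

def difficulty_level(human_accuracy: float, num_levels: int = 9) -> int:
    """Map a question's human accuracy in [0, 1] to a level in 1..num_levels.

    Lower human accuracy -> higher (harder) level.
    """
    acc = min(max(human_accuracy, 0.0), 1.0)
    # Evenly spaced accuracy bins; the paper's exact thresholds are not given.
    level = num_levels - int(acc * num_levels)
    return max(1, min(num_levels, level))

# Example: a question only 22% of humans answer correctly lands near the hard end.
print(difficulty_level(0.22))  # -> 8
```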

Similar Work