
GameEval: Evaluating LLMs on Conversational Games

Dan Qiao, Chenfei Wu, Yaobo Liang, Juntao Li, Nan Duan. arXiv 2023

[Paper] [Code]    
Applications, Ethics And Bias, Has Code, Security, Tools

The rapid advancements in large language models (LLMs) have presented challenges in evaluating those models. Existing evaluation methods are either reference-based or preference-based, which inevitably require human intervention or introduce test bias caused by evaluator models. In this paper, we propose GameEval, a novel approach to evaluating LLMs through goal-driven conversational games, overcoming the limitations of previous methods. GameEval treats LLMs as game players and assigns them distinct roles with specific goals achieved by launching conversations of various forms, including discussion, question answering, and voting. We design three unique games with cooperative or adversarial objectives, accompanied by corresponding evaluation metrics, to show how this new paradigm comprehensively evaluates model performance. Through extensive experiments, we show that GameEval can effectively differentiate the capabilities of various LLMs, providing a comprehensive assessment of their integrated abilities to solve complex problems. Our public anonymous code is available at https://github.com/GameEval/GameEval.
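The abstract describes the general setup: each LLM is a player with a role and a goal, and the game proceeds through conversational phases such as discussion, question answering, and voting. The sketch below is only an illustrative assumption of what such a goal-driven game loop could look like in Python; it is not the authors' implementation (see the linked repository for that), and all class, function, and parameter names here are hypothetical.

```python
"""Hypothetical sketch of a GameEval-style conversational game loop.

Not the authors' code; names and structure are illustrative assumptions.
"""
from dataclasses import dataclass, field
from typing import Callable, List

# An LLM is modeled as a callable: prompt string in, reply string out.
LLM = Callable[[str], str]


@dataclass
class Player:
    name: str
    role: str          # e.g. an adversarial role vs. a cooperative one
    goal: str          # the goal this player must achieve to win
    model: LLM
    history: List[str] = field(default_factory=list)

    def speak(self, phase: str) -> str:
        # Build a role- and goal-conditioned prompt from the shared conversation.
        prompt = (
            f"You are {self.name}, playing the role: {self.role}.\n"
            f"Your goal: {self.goal}.\n"
            "Conversation so far:\n" + "\n".join(self.history) +
            f"\nIt is the {phase} phase. Respond in character."
        )
        return self.model(prompt)


def run_game(players: List[Player],
             phases=("discussion", "question-answering", "voting"),
             rounds: int = 3) -> List[str]:
    """Run a goal-driven conversational game and return the full transcript."""
    transcript: List[str] = []
    for _ in range(rounds):
        for phase in phases:
            for player in players:
                utterance = f"[{phase}] {player.name}: {player.speak(phase)}"
                transcript.append(utterance)
                # Every player observes the shared conversation.
                for p in players:
                    p.history.append(utterance)
    return transcript


if __name__ == "__main__":
    # Toy stand-in model so the sketch runs without any API access.
    echo_model: LLM = lambda prompt: "I act toward my goal."
    players = [
        Player("Alice", "civilian", "identify the impostor", echo_model),
        Player("Bob", "impostor", "avoid being identified", echo_model),
    ]
    for line in run_game(players, rounds=1):
        print(line)
```

In this kind of setup, success is judged by whether each player achieves its assigned goal (e.g. winning a final vote), rather than by comparing outputs to references or by asking another model to rank responses, which is the limitation of prior evaluation methods that GameEval aims to avoid.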

Similar Work