Beyond Benchmarking: A New Paradigm For Evaluation And Assessment Of Large Language Models

Liu Jin, Li Qingquan, Du Wenlong. arXiv 2024

[Paper]    
Efficiency And Optimization, Uncategorized

Current benchmarks for evaluating large language models (LLMs) suffer from issues such as restricted evaluation content, untimely updates, and a lack of optimization guidance. In this paper, we propose a new paradigm for the measurement of LLMs: Benchmarking-Evaluation-Assessment. This paradigm shifts the “location” of LLM evaluation from the “examination room” to the “hospital”: by conducting a “physical examination” on LLMs, it uses specific task-solving as the evaluation content, performs deep attribution of the problems found within LLMs, and provides recommendations for optimization.
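
The abstract describes a three-stage flow: benchmark the model on tasks, attribute observed weaknesses, then turn those attributions into optimization guidance. The sketch below illustrates that flow only at a schematic level; every name in it (`Diagnosis`, `benchmark`, `evaluate`, `assess`, the dummy tasks and thresholds) is a hypothetical placeholder and not an API from the paper or any released codebase.

```python
# Illustrative sketch of a Benchmarking-Evaluation-Assessment pipeline.
# All names and thresholds are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class Diagnosis:
    """Result of one 'physical examination' of an LLM on a single task."""
    task: str
    score: float
    attributed_issues: list[str] = field(default_factory=list)
    recommendations: list[str] = field(default_factory=list)


def benchmark(model, tasks: dict[str, callable]) -> dict[str, float]:
    """Stage 1: classic benchmarking -- score the model on a fixed task suite."""
    return {name: scorer(model) for name, scorer in tasks.items()}


def evaluate(scores: dict[str, float], threshold: float = 0.7) -> list[Diagnosis]:
    """Stage 2: evaluation -- attribute weaknesses instead of only ranking."""
    diagnoses = []
    for task, score in scores.items():
        issues = [] if score >= threshold else [f"weak task-solving on {task}"]
        diagnoses.append(Diagnosis(task, score, issues))
    return diagnoses


def assess(diagnoses: list[Diagnosis]) -> list[Diagnosis]:
    """Stage 3: assessment -- turn attributed problems into optimization advice."""
    for d in diagnoses:
        d.recommendations = [f"target optimization at: {i}" for i in d.attributed_issues]
    return diagnoses


if __name__ == "__main__":
    dummy_model = object()  # stand-in for a real LLM client
    tasks = {"math_reasoning": lambda m: 0.55, "summarization": lambda m: 0.85}
    for d in assess(evaluate(benchmark(dummy_model, tasks))):
        print(d)
```

The point of the sketch is the shape of the pipeline rather than any particular scoring rule: the evaluation stage produces per-problem attributions, and the assessment stage consumes them to emit guidance, which is what distinguishes this paradigm from a benchmark that stops at a leaderboard score.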
