INSTRUCTEVAL: Towards Holistic Evaluation Of Instruction-tuned Large Language Models

Chia Yew Ken, Hong Pengfei, Bing Lidong, Poria Soujanya. arXiv 2023

[Paper] [Code]    
Agentic Applications Fine Tuning GPT Has Code Model Architecture Pretraining Methods RAG Reinforcement Learning Tools Training Techniques

Instruction-tuned large language models have revolutionized natural language processing and have shown great potential in applications such as conversational agents. These models, such as GPT-4, can not only master language but also solve complex tasks in areas like mathematics, coding, medicine, and law. Despite their impressive capabilities, there is still a lack of comprehensive understanding regarding their full potential, primarily due to the black-box nature of many models and the absence of holistic evaluation studies. To address these challenges, we present INSTRUCTEVAL, a more comprehensive evaluation suite designed specifically for instruction-tuned large language models. Unlike previous works, our evaluation involves a rigorous assessment of models based on problem-solving, writing ability, and alignment to human values. We take a holistic approach to analyze various factors affecting model performance, including the pretraining foundation, instruction-tuning data, and training methods. Our findings reveal that the quality of instruction data is the most crucial factor in scaling model performance. While open-source models demonstrate impressive writing abilities, there is substantial room for improvement in problem-solving and alignment. We are encouraged by the rapid development of models by the open-source community, but we also highlight the need for rigorous evaluation to support claims made about these models. Through INSTRUCTEVAL, we aim to foster a deeper understanding of instruction-tuned models and advancements in their capabilities. INSTRUCTEVAL is publicly available at https://github.com/declare-lab/instruct-eval.
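The suite's problem-solving track covers multiple-choice benchmarks (e.g., MMLU). As a rough illustration of the kind of zero-shot multiple-choice scoring such harnesses perform, the sketch below ranks answer letters by the log-probability a causal LM assigns to each one immediately after the prompt. The checkpoint name, the example question, and the single-letter scoring recipe are illustrative assumptions, not code from the INSTRUCTEVAL repository; see the linked repo for the actual harness.

```python
# Minimal sketch of zero-shot multiple-choice scoring, assuming a
# HuggingFace causal LM. Not the INSTRUCTEVAL implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; substitute any instruction-tuned causal LM.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Hypothetical MMLU-style item, formatted as a single prompt.
question = (
    "Question: Which planet in the Solar System has the most moons?\n"
    "A. Earth\nB. Mars\nC. Saturn\nD. Mercury\n"
    "Answer:"
)

# Score each option by the log-probability of its letter as the
# next token after the prompt (a common zero-shot MCQ recipe).
inputs = tokenizer(question, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # next-token distribution
log_probs = torch.log_softmax(logits, dim=-1)

scores = {}
for letter in ["A", "B", "C", "D"]:
    token_id = tokenizer.encode(" " + letter)[0]  # leading space for GPT-2 BPE
    scores[letter] = log_probs[token_id].item()

prediction = max(scores, key=scores.get)
print(scores, "->", prediction)
```

Accuracy over a benchmark then reduces to the fraction of items where the top-scoring letter matches the gold answer; writing ability and alignment, by contrast, are judged with free-form generation rather than likelihood ranking.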
