
FOFO: A Benchmark To Evaluate Llms' Format-following Capability

Xia Congying, Xing Chen, Du Jiangshu, Yang Xinyi, Feng Yihao, Xu Ran, Yin Wenpeng, Xiong Caiming. arXiv 2024

[Paper] [Code]    
Tags: Agentic, GPT, Has Code, Model Architecture, Reinforcement Learning, Uncategorized

This paper presents FoFo, a pioneering benchmark for evaluating large language models' (LLMs) ability to follow complex, domain-specific formats, a crucial yet under-examined capability for their application as AI agents. Despite LLMs' advancements, existing benchmarks fail to adequately assess their format-following proficiency. FoFo fills this gap with a diverse range of real-world formats and instructions, developed through an AI-human collaborative method. Our evaluation across both open-source (e.g., Llama 2, WizardLM) and closed-source (e.g., GPT-4, PaLM 2, Gemini) LLMs highlights three key findings: open-source models significantly lag behind closed-source ones in format adherence; LLMs' format-following performance is independent of their content-generation quality; and LLMs' format proficiency varies across domains. These insights suggest the need for specialized tuning of format-following skills and highlight FoFo's role in guiding the selection of domain-specific AI agents. FoFo is released at https://github.com/SalesforceAIResearch/FoFo.
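
To make the evaluated capability concrete, below is a minimal sketch of what a format-adherence check could look like. The domain, instruction wording, field names, and rule-based checker are illustrative assumptions for this example only, not FoFo's actual data schema or scoring pipeline; the benchmark itself covers far richer, domain-specific formats and judges adherence separately from content quality.

```python
import json

# Hypothetical FoFo-style test case: a domain-specific instruction that pins
# down an exact output format. Fields and wording are illustrative only,
# not FoFo's actual data schema.
example = {
    "domain": "healthcare",
    "instruction": (
        "Summarize the patient visit as JSON with exactly the keys "
        "'patient_id', 'diagnosis', and 'medications' (a list of strings)."
    ),
}


def model_generate(prompt: str) -> str:
    """Stand-in for an LLM call; swap in any open- or closed-source model."""
    return ('{"patient_id": "A-103", "diagnosis": "sinusitis", '
            '"medications": ["amoxicillin"]}')


def follows_format(output: str) -> bool:
    """Rule-based adherence check for this single format: structure only,
    content quality is deliberately ignored."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    required = {"patient_id", "diagnosis", "medications"}
    return (
        set(data) == required
        and isinstance(data["medications"], list)
        and all(isinstance(m, str) for m in data["medications"])
    )


if __name__ == "__main__":
    output = model_generate(example["instruction"])
    print("format followed:", follows_format(output))
```

Because a checker like this inspects only structure, it mirrors the paper's point that format adherence can be measured independently of how good the generated content is.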

Similar Work