
FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models

Jiang Yuxin, Wang Yufei, Zeng Xingshan, Zhong Wanjun, Li Liangyou, Mi Fei, Shang Lifeng, Jiang Xin, Liu Qun, Wang Wei. arXiv 2023

Tags: Applications, Has Code, Prompting, Reinforcement Learning

The ability to follow instructions is crucial for Large Language Models (LLMs) to handle various real-world applications. Existing benchmarks primarily focus on evaluating pure response quality rather than assessing whether the response follows the constraints stated in the instruction. To fill this research gap, the paper proposes FollowBench, a Multi-level Fine-grained Constraints Following Benchmark for LLMs. FollowBench comprehensively covers five types of fine-grained constraints (Content, Situation, Style, Format, and Example). To enable precise estimation of constraint-following ability across diverse difficulties, the authors introduce a Multi-level mechanism that incrementally adds a single constraint to the initial instruction at each level. To assess whether an LLM's output satisfies every individual constraint, they propose prompting strong LLMs with constraint-evolution paths to handle challenging open-ended instructions. By evaluating 13 popular closed-source and open-source LLMs on FollowBench, the paper highlights the weaknesses of LLMs in instruction following and points toward potential avenues for future work. The data and code are publicly available at https://github.com/YJiangcm/FollowBench.
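The sketch below illustrates the Multi-level idea described in the abstract: each level adds one constraint to the initial instruction, and a strong judge LLM is shown the full constraint-evolution path so it can check each added constraint individually. The function names, prompt wording, and example constraints here are illustrative assumptions, not the authors' actual implementation (see the FollowBench repository for that).

```python
# Minimal sketch (assumed structure, not the authors' code) of building
# multi-level instructions and a judge prompt over the constraint-evolution path.

from typing import List


def build_multilevel_instructions(initial: str, constraints: List[str]) -> List[str]:
    """Level k = the initial instruction plus the first k constraints."""
    levels = []
    current = initial
    for constraint in constraints:
        current = f"{current} {constraint}"
        levels.append(current)
    return levels


def build_judge_prompt(evolution_path: List[str], response: str) -> str:
    """Show the judge the whole constraint-evolution path, then ask it to
    verify each added constraint against the model's response."""
    numbered = "\n".join(f"Level {i}: {instr}" for i, instr in enumerate(evolution_path))
    return (
        "Below is an instruction that was made harder level by level, "
        "each level adding exactly one constraint.\n"
        f"{numbered}\n\n"
        f"Model response to the final level:\n{response}\n\n"
        "For each added constraint, answer YES or NO: is it satisfied?"
    )


if __name__ == "__main__":
    initial = "Write a short poem about autumn."
    constraints = [
        "The poem must have exactly four lines.",      # format constraint
        "Every line must mention a color.",            # content constraint
        "Write it in the style of a haiku sequence.",  # style constraint
    ]
    levels = build_multilevel_instructions(initial, constraints)
    path = [initial] + levels  # level 0 is the unconstrained instruction
    print(build_judge_prompt(path, "<model response goes here>"))
```

In this sketch the judge sees the whole path rather than only the final instruction, which mirrors the paper's motivation: making the incremental constraints explicit lets the judge attribute failures to specific constraints even for open-ended tasks.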
