SYNTHEVAL: Hybrid Behavioral Testing Of NLP Models With Synthetic Checklists

Zhao Raoyuan, Köksal Abdullatif, Liu Yihong, Weissweiler Leonie, Korhonen Anna, Schütze Hinrich. arXiv 2024


Traditional benchmarking in NLP typically involves using static held-out test sets. However, this approach often results in an overestimation of performance and lacks the ability to offer comprehensive, interpretable, and dynamic assessments of NLP models. Recently, works like DynaBench (Kiela et al., 2021) and CheckList (Ribeiro et al., 2020) have addressed these limitations through behavioral testing of NLP models with test types generated by a multistep human-annotated pipeline. Unfortunately, manually creating a variety of test types requires much human labor, often at prohibitive cost. In this work, we propose SYNTHEVAL, a hybrid behavioral testing framework that leverages large language models (LLMs) to generate a wide range of test types for a comprehensive evaluation of NLP models. SYNTHEVAL first generates sentences via LLMs using controlled generation, and then identifies challenging examples by comparing the predictions made by LLMs with task-specific NLP models. In the last stage, human experts investigate the challenging examples, manually design templates, and identify the types of failures the task-specific models consistently exhibit. We apply SYNTHEVAL to two classification tasks, sentiment analysis and toxic language detection, and show that our framework is effective in identifying weaknesses of strong models on these tasks. We share our code at https://github.com/Loreley99/SynthEval_CheckList.
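The abstract's second stage, filtering synthetic sentences down to "challenging examples" by comparing an LLM's labels against a task-specific model's labels, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the classifier checkpoint, the `llm_label` placeholder, and the disagreement criterion are illustrative assumptions; see the paper's repository for the actual pipeline.

```python
# Minimal sketch of the disagreement-based filtering stage (assumed setup,
# not the authors' code): keep synthetic sentences on which an LLM and a
# task-specific classifier assign different sentiment labels.
from typing import Callable

from transformers import pipeline

# Task-specific model: any fine-tuned sentiment classifier would do here.
task_model = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)


def llm_label(sentence: str) -> str:
    """Placeholder for querying an LLM with a zero-shot sentiment prompt.

    In practice this would call an LLM (API or local) and parse
    "POSITIVE" / "NEGATIVE" out of its response.
    """
    raise NotImplementedError("plug in your LLM client here")


def find_challenging(
    sentences: list[str],
    llm_fn: Callable[[str], str] = llm_label,
) -> list[dict]:
    """Return the synthetic sentences on which the two systems disagree."""
    challenging = []
    for sentence in sentences:
        task_pred = task_model(sentence)[0]["label"]  # e.g. "POSITIVE"
        llm_pred = llm_fn(sentence)
        if task_pred != llm_pred:
            challenging.append(
                {"sentence": sentence, "task_model": task_pred, "llm": llm_pred}
            )
    return challenging
```

The disagreement set produced this way is what, per the abstract, human experts then inspect to design templates and characterize the task-specific model's consistent failure modes.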
