StrucText-Eval: An Autogenerated Benchmark For Evaluating Large Language Model's Ability In Structure-rich Text Understanding

Gu Zhouhong, Ye Haoning, Zhou Zeyang, Feng Hongwei, Xiao Yanghua. arXiv 2024

[Paper] [Code]    
Tags: Fine-Tuning, Has Code, Pretraining Methods, Training Techniques

Given the substantial volumes of structured data held by many companies, enabling Large Language Models (LLMs) to directly understand structured text in non-structured forms could significantly enhance their capabilities across various business scenarios. To this end, we propose an evaluation data generation method for assessing LLMs' ability to understand structure-rich text, which generates structured data of controllable complexity based on manually crafted question templates and generation rules. Building on this generation method, we introduce StrucText-Eval, a benchmark comprising 6,032 questions across 8 different structured languages and 29 specific tasks. Furthermore, considering human proficiency in rule-based tasks, we also present StrucText-Eval-Hard, which includes 3,016 questions designed to further examine the gap between LLMs and human performance. Results indicate that the best-performing LLM currently achieves an accuracy of 65.0% on StrucText-Eval-Hard, while human accuracy reaches up to 95.7%. Moreover, while fine-tuning on StrucText-Eval can enhance existing LLMs' understanding of all structured languages, it does not necessarily improve performance across all task types. The benchmark and generation code are open-sourced at https://github.com/MikeGu721/StrucText-Eval
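The abstract describes the generation pipeline only at a high level: structured data of controllable complexity is produced by rules, and questions are instantiated from manually crafted templates with rule-derived ground-truth answers. As an illustration of what such template-plus-rule generation can look like, here is a minimal Python sketch; the nested-JSON task, the depth/width complexity knobs, and all function names below are illustrative assumptions rather than the released implementation (see the linked repository for the actual code).

```python
import json
import random

# Minimal sketch of template-based question generation over structured text.
# The "value lookup" task, the depth/width parameters, and every name here
# are assumptions for illustration; the official generator lives at
# https://github.com/MikeGu721/StrucText-Eval.

def generate_tree(depth: int, width: int) -> dict:
    """Build a nested dict whose complexity is controlled by depth and width."""
    if depth == 0:
        return {f"k{i}": random.randint(0, 99) for i in range(width)}
    return {f"k{i}": generate_tree(depth - 1, width) for i in range(width)}

def sample_path(tree: dict):
    """Walk random keys down to a leaf; return the key path and the leaf value."""
    path, node = [], tree
    while isinstance(node, dict):
        key = random.choice(list(node))
        path.append(key)
        node = node[key]
    return path, node

def make_example(depth: int, width: int) -> dict:
    """Render one benchmark item: structured context, templated question, answer."""
    tree = generate_tree(depth, width)
    path, value = sample_path(tree)
    question = "What value is stored at the path {}?".format(" -> ".join(path))
    return {
        "context": json.dumps(tree, indent=2),  # structured text shown to the LLM
        "question": question,                   # instantiated from a manual template
        "answer": str(value),                   # ground truth derived by rule
    }

if __name__ == "__main__":
    example = make_example(depth=3, width=2)   # larger depth/width => harder item
    print(example["question"], "->", example["answer"])
```

Because the ground truth is computed by the same rules that build the structure, generated items can be graded automatically, and raising complexity parameters such as the depth and width above is one plausible way a harder split like StrucText-Eval-Hard could be produced; the paper's actual difficulty controls may differ.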

Similar Work