
See What LLMs Cannot Answer: A Self-Challenge Framework for Uncovering LLM Weaknesses

Chen Yulong, Liu Yang, Yan Jianhao, Bai Xuefeng, Zhong Ming, Yang Yinghao, Yang Ziyi, Zhu Chenguang, Zhang Yue. arXiv 2024

[Paper]    
Fine Tuning, GPT, Model Architecture, Pretraining Methods, Prompting, Tools, Training Techniques

The impressive performance of Large Language Models (LLMs) has consistently surpassed numerous human-designed benchmarks, presenting new challenges in assessing their shortcomings. Designing tasks that expose LLMs' limitations is therefore becoming increasingly important. In this paper, we investigate whether an LLM can discover its own limitations from the errors it makes. To this end, we propose a Self-Challenge evaluation framework with a human in the loop. Starting from seed instances that GPT-4 fails to answer, we prompt GPT-4 to summarize error patterns that can be used to generate new instances, and we iteratively incorporate human feedback to refine these patterns so they yield more challenging data. We end up with 8 diverse patterns, such as text manipulation and questions with assumptions. We then build a benchmark, SC-G4, consisting of 1,835 instances generated by GPT-4 using these patterns, with human-annotated gold responses. SC-G4 serves as a challenging benchmark that allows for a detailed assessment of LLMs' abilities. Our results show that only 44.96% of instances in SC-G4 can be answered correctly by GPT-4. Interestingly, our pilot study indicates that these error patterns also challenge other LLMs, such as Claude-3 and Llama-3, and cannot be fully resolved through fine-tuning. Our work takes a first step toward demonstrating that LLMs can autonomously identify their inherent flaws, and it provides insights for future dynamic and automatic evaluation.
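
The abstract describes an iterative pattern-discovery loop (seed failures → summarize error patterns → generate new instances → human feedback → refine). The sketch below illustrates one way that loop could be wired up; the model name, prompt wording, the `ask` helper, and the console-based human-feedback step are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a Self-Challenge-style loop, assuming the OpenAI chat API.
# Prompts, model choice, and the feedback mechanism are illustrative only.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, model: str = "gpt-4") -> str:
    """Single-turn chat completion helper (hypothetical wrapper)."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def self_challenge(seed_failures: list[str], rounds: int = 3) -> str:
    """Distill error patterns from seed failures, then iteratively refine
    them with human feedback (collected here via stdin for simplicity)."""
    patterns = ask(
        "Summarize the common error patterns behind these questions that "
        "GPT-4 answered incorrectly:\n" + "\n".join(seed_failures)
    )
    for _ in range(rounds):
        # Generate new candidate instances from the current patterns.
        candidates = ask(
            "Using these error patterns, write new questions that are "
            f"likely to be hard for GPT-4:\n{patterns}"
        )
        # Human-in-the-loop: an annotator reviews the candidates and says
        # which patterns actually produce challenging, well-posed instances.
        feedback = input(f"Generated instances:\n{candidates}\nFeedback> ")
        patterns = ask(
            "Refine these error patterns given the feedback.\n"
            f"Patterns:\n{patterns}\nFeedback:\n{feedback}"
        )
    return patterns
```

In the paper's setting, the refined patterns are then used to generate the SC-G4 benchmark instances, which are paired with human-annotated gold responses before evaluation.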

Similar Work