Exploring Llms As A Source Of Targeted Synthetic Textual Data To Minimize High Confidence Misclassifications

Lippmann Philip, Spaan Matthijs T. J., Yang Jie. arXiv 2024

[Paper]
Security Training Techniques

Natural Language Processing (NLP) models optimized for predictive performance often make high confidence errors and are vulnerable to adversarial and out-of-distribution data. Existing work has mainly focused on mitigating such errors using either humans or automated approaches. In this study, we explore the use of large language models (LLMs) for data augmentation as a potential solution to the issue of NLP models making wrong predictions with high confidence during classification tasks. We compare the effectiveness of synthetic data generated by LLMs with that of human data obtained via the same procedure. For mitigation, humans or LLMs provide natural language characterizations of high confidence misclassifications, which are used to generate synthetic data that then extend the training set. We conduct an extensive evaluation of our approach on three classification tasks and demonstrate its effectiveness in reducing the number of high confidence misclassifications present in the model, all while maintaining the same level of accuracy. Moreover, we find that the cost gap between humans and LLMs surpasses an order of magnitude, as LLMs attain human-like performance while being more scalable.
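The mitigation loop the abstract describes — collect high confidence misclassifications, have a human or LLM characterize them in natural language, and use that characterization to generate synthetic training data — can be sketched roughly as below. All function names (`high_confidence_errors`, `characterization_prompt`) and the toy classifier are illustrative assumptions, not the authors' actual implementation:

```python
def high_confidence_errors(examples, predict, threshold=0.9):
    """Return examples the classifier gets wrong with confidence >= threshold."""
    errors = []
    for text, gold in examples:
        label, confidence = predict(text)
        if label != gold and confidence >= threshold:
            errors.append((text, gold, label, confidence))
    return errors


def characterization_prompt(errors, n_new=10):
    """Build a prompt asking an annotator (human or LLM) to describe the
    shared error pattern and produce new labeled examples in that style,
    which would then be appended to the training set."""
    lines = ["The classifier made these high-confidence mistakes:"]
    for text, gold, pred, conf in errors:
        lines.append(f'- "{text}" (gold: {gold}, predicted: {pred}, p={conf:.2f})')
    lines.append(f"Describe the shared pattern, then write {n_new} new labeled "
                 "examples matching it to augment the training set.")
    return "\n".join(lines)


# Toy sentiment classifier that is confidently wrong on negated sentences.
def toy_predict(text):
    return ("positive", 0.95) if "not" in text else ("negative", 0.60)


data = [("this is not good", "negative"), ("meh", "negative")]
errs = high_confidence_errors(data, toy_predict)
prompt = characterization_prompt(errs)
```

In the paper's setup the characterization and generation steps are performed either by human annotators or by an LLM, and the comparison between the two sources is the core of the study.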
