Data Generation Using Large Language Models For Text Classification: An Empirical Case Study

Yinheng Li, Rogerio Bonatti, Sara Abdali, Justin Wagle, Kazuhito Koishida. arXiv 2024

[Paper]    
Applications, Prompting, Training Techniques

Using Large Language Models (LLMs) to generate synthetic data for model training has become increasingly popular in recent years. While LLMs are capable of producing realistic training data, the effectiveness of data generation is influenced by various factors, including the choice of prompt, task complexity, and the quality, quantity, and diversity of the generated data. In this work, we focus exclusively on using synthetic data for text classification tasks. Specifically, we use natural language understanding (NLU) models trained on synthetic data to assess the quality of synthetic data from different generation approaches. This work provides an empirical analysis of the impact of these factors and offers recommendations for better data generation practices.
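The evaluation protocol the abstract describes — generate labeled examples with an LLM, train a classifier on only that synthetic data, then measure accuracy on real held-out data — can be sketched as follows. This is a minimal, offline illustration, not the paper's actual setup: `generate_synthetic_examples` is a hypothetical stand-in for an LLM call (here it returns canned texts so the sketch runs without an API), and the "NLU model" is a toy bag-of-words nearest-centroid classifier.

```python
from collections import Counter, defaultdict

def generate_synthetic_examples(label, n):
    # Hypothetical stand-in for prompting an LLM, e.g.
    #   "Write a short {label} movie review."
    # Canned texts are used here so the sketch runs offline.
    canned = {
        "positive": ["a wonderful heartfelt film",
                     "great acting and a great story",
                     "loved every wonderful minute"],
        "negative": ["a dull boring mess",
                     "terrible pacing and boring dialogue",
                     "awful script awful acting"],
    }
    return [(canned[label][i % len(canned[label])], label) for i in range(n)]

def featurize(text):
    # Bag-of-words term counts.
    return Counter(text.lower().split())

def train_centroids(examples):
    # Average bag-of-words vector per class (a toy NLU model).
    sums, counts = defaultdict(Counter), Counter()
    for text, label in examples:
        sums[label].update(featurize(text))
        counts[label] += 1
    return {lab: {w: c / counts[lab] for w, c in vec.items()}
            for lab, vec in sums.items()}

def predict(centroids, text):
    feats = featurize(text)
    # Pick the class whose centroid overlaps the text the most.
    return max(centroids,
               key=lambda lab: sum(feats[w] * centroids[lab].get(w, 0.0)
                                   for w in feats))

# Train on synthetic data only; evaluate on held-out "real" examples.
synthetic = (generate_synthetic_examples("positive", 3)
             + generate_synthetic_examples("negative", 3))
model = train_centroids(synthetic)
real_test = [("a heartfelt wonderful story", "positive"),
             ("boring and terrible", "negative")]
accuracy = sum(predict(model, t) == y for t, y in real_test) / len(real_test)
print(accuracy)
```

Under this protocol, test accuracy of the downstream classifier serves as a proxy for the quality of the generation approach: prompts that yield more diverse or more realistic synthetic data should produce higher accuracy on the real test set.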
