
Scaling Synthetic Data Creation With 1,000,000,000 Personas

Chan Xin, Wang Xiaoyang, Yu Dian, Mi Haitao, Yu Dong. arXiv 2024

[Paper]    
Applications Prompting RAG Reinforcement Learning Tools

We propose a novel persona-driven data synthesis methodology that leverages the various perspectives within a large language model (LLM) to create diverse synthetic data. To fully exploit this methodology at scale, we introduce Persona Hub – a collection of 1 billion diverse personas automatically curated from web data. These 1 billion personas (~13% of the world's total population), acting as distributed carriers of world knowledge, can tap into almost every perspective encapsulated within the LLM, thereby facilitating the creation of diverse synthetic data at scale for various scenarios. By showcasing Persona Hub's use cases in synthesizing high-quality mathematical and logical reasoning problems, instructions (i.e., user prompts), knowledge-rich texts, game NPCs, and tools (functions) at scale, we demonstrate that persona-driven data synthesis is versatile, scalable, flexible, and easy to use. It may drive a paradigm shift in synthetic data creation and its practical applications, with a profound impact on LLM research and development.
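The core mechanic the abstract describes is simple to sketch: prepend a persona description to a data-synthesis instruction so the same task elicits different outputs across personas. A minimal, hedged illustration in Python follows; the template wording and the example personas are assumptions for illustration, not the paper's exact prompts, and the resulting strings would be sent to an LLM of your choice.

```python
def build_persona_prompt(persona: str, task: str) -> str:
    """Combine a synthesis task with a persona description.

    Conditioning the task on a persona steers the LLM toward that
    perspective, so iterating over many personas yields diverse
    synthetic data for the same underlying task.
    """
    return (
        f"{task}\n\n"
        f"Write it from the perspective of the following persona:\n"
        f"{persona}"
    )

# Hypothetical personas standing in for entries from Persona Hub.
personas = [
    "a moving company driver who plans multi-stop delivery routes",
    "a chemistry teacher preparing a stoichiometry quiz",
]
task = "Create a challenging math word problem."

# One prompt per persona; each would be sent to an LLM to generate
# one synthetic example.
prompts = [build_persona_prompt(p, task) for p in personas]
```

Scaling this loop over a billion curated personas, rather than hand-written prompt variations, is what makes the approach cover so many perspectives.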

Similar Work