
P3: A Policy-driven, Pace-adaptive, And Diversity-promoted Framework For Optimizing LLM Training

Yang Yingxuan, Wang Huayi, Wen Muning, Zhang Weinan. arXiv 2024

[Paper]    
Efficiency And Optimization Fine Tuning Pretraining Methods Pruning Tools Training Techniques

In the rapidly evolving field of Large Language Models (LLMs), selecting high-quality data for fine-tuning is essential. This paper focuses on task-specific data pruning and selection to enhance fine-tuning. We introduce an innovative framework, termed P3, which improves LLM performance through a dynamic, adaptive training strategy. Specifically, P3 comprises the following components: (1) Policy-driven Difficulty Measurement: we measure the difficulty of data based on the model's real-time performance, transitioning from static, predefined metrics to dynamic, adaptable ones. (2) Pace-adaptive Selection: we employ self-paced learning (SPL) to gradually select increasingly challenging data, progressively enhancing the model's performance. (3) Diversity Promotion: we integrate a Determinantal Point Process (DPP) into the selection process to promote diversity within and between samples, enriching the learning process. We validate our method on two well-known LLM datasets, APPS and MATH, designed for logical reasoning scenarios. The results show that our P3 framework significantly improves training outcomes compared to traditional methods. By fundamentally refining data selection and utilization strategies, P3 not only advances the theoretical understanding of dynamic training approaches but also provides a versatile framework that can revolutionize model training in natural language processing.
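The abstract outlines three selection components. Below is a minimal, illustrative sketch of how such a selection round could be composed, assuming per-sample loss as the policy-driven difficulty signal, a growing SPL threshold for pace-adaptive selection, and greedy MAP inference with a linear kernel as a stand-in for DPP diversity. All function names, parameters, and thresholds here are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of a P3-style selection round; names are illustrative only.
import numpy as np

def policy_difficulty(losses):
    """Policy-driven difficulty: use the model's current per-sample loss
    as a dynamic difficulty score (higher loss = harder). Assumption:
    loss is the difficulty signal; the paper may use another policy."""
    return np.asarray(losses)

def spl_candidates(difficulty, threshold):
    """Pace-adaptive selection: keep samples no harder than the current
    self-paced learning threshold; the threshold grows across epochs."""
    return np.where(difficulty <= threshold)[0]

def greedy_dpp(embeddings, k):
    """Diversity promotion: greedy MAP inference for a DPP with a linear
    kernel L = X X^T (a common approximation, not necessarily the paper's)."""
    X = np.asarray(embeddings, dtype=float)
    L = X @ X.T
    selected, remaining = [], list(range(len(X)))
    for _ in range(min(k, len(X))):
        best, best_score = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            sub = L[np.ix_(idx, idx)]
            # log-determinant as the diversity-aware utility of the subset
            _, logdet = np.linalg.slogdet(sub + 1e-6 * np.eye(len(idx)))
            if logdet > best_score:
                best, best_score = i, logdet
        selected.append(best)
        remaining.remove(best)
    return selected

# Example: one selection round with synthetic losses and embeddings.
rng = np.random.default_rng(0)
losses = rng.random(100)              # stand-in for real-time model losses
embeds = rng.normal(size=(100, 16))   # stand-in for sample embeddings

difficulty = policy_difficulty(losses)
easy_enough = spl_candidates(difficulty, threshold=0.5)  # threshold grows each epoch
batch = [easy_enough[i] for i in greedy_dpp(embeds[easy_enough], k=8)]
print("selected sample indices:", batch)
```

In this sketch, the selected batch would be used for the next fine-tuning step, after which losses are recomputed and the SPL threshold is raised, so that harder samples gradually become eligible.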

Similar Work