
DataSculpt: Crafting Data Landscapes for LLM Post-Training through Multi-Objective Partitioning

Lu Keer, Liang Zheng, Nie Xiaonan, Pan Da, Zhang Shusen, Zhao Keshi, Chen Weipeng, Zhou Zenan, Dong Guosheng, Zhang Wentao, Cui Bin. arXiv 2024

[Paper]    
Applications · Ethics And Bias · Model Architecture · Tools · Training Techniques

Effective long-context modeling is important for Large Language Models (LLMs) across a wide range of applications. In practice, however, LLMs' performance on long contexts often falls short of expectations, and training on prolonged sequences is difficult to manage efficiently. The difficulty is compounded by the scarcity of comprehensive, diverse training datasets for long sequences, which stems both from inherent length biases across data sources and from the logistical complexity of managing massive data for extended-context training. In this work, we introduce DataSculpt, a data construction framework that strategically restructures training data for extended-context training. Our evaluations demonstrate DataSculpt's capacity to boost long-context training performance, with an 18.09% gain in retrieval augmentation, 21.23% in summarization, 21.27% in reading comprehension, and a 3.81% gain in code completion, while also improving the models' overall proficiency by 4.88%.
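The abstract does not spell out how the multi-objective partitioning works, so the following is only a rough, hypothetical sketch of the general idea: greedily packing short documents into fixed-length training contexts while trading off several objectives at once. The weights, the two objectives (embedding relevance and budget utilization), and all function and variable names below are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def pack_contexts(embeddings, lengths, max_len=32768,
                  w_relevance=0.7, w_fill=0.3):
    """Hypothetical greedy multi-objective packer (not the paper's algorithm).

    Each candidate document is scored by a weighted sum of
    (a) cosine similarity to the mean embedding of the documents
        already in the context, and
    (b) how well its token length fills the remaining budget.
    """
    remaining = set(range(len(lengths)))
    contexts = []
    while remaining:
        # Seed a new context with the longest remaining document.
        seed = max(remaining, key=lambda i: lengths[i])
        remaining.remove(seed)
        ctx, used = [seed], lengths[seed]
        centroid = embeddings[seed].copy()
        while True:
            budget = max_len - used
            candidates = [i for i in remaining if lengths[i] <= budget]
            if not candidates:
                break  # context full (or seed alone exceeds max_len)

            def score(i):
                rel = np.dot(centroid, embeddings[i]) / (
                    np.linalg.norm(centroid) * np.linalg.norm(embeddings[i]) + 1e-8)
                fill = lengths[i] / budget  # prefer docs that use the budget
                return w_relevance * rel + w_fill * fill

            best = max(candidates, key=score)
            remaining.remove(best)
            ctx.append(best)
            used += lengths[best]
            # Running mean of member embeddings.
            centroid = (centroid * (len(ctx) - 1) + embeddings[best]) / len(ctx)
        contexts.append(ctx)
    return contexts

# Toy usage: six documents with random embeddings and token lengths.
rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 16))
lens = [9000, 14000, 5000, 20000, 7000, 11000]
print(pack_contexts(emb, lens, max_len=32768))
```

In a real pipeline the relevance term would come from a learned embedding model, and the packed document groups would be concatenated into single training sequences; the point of the sketch is only how several objectives can be combined into one greedy packing score.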

Similar Work