Rethinking The Instruction Quality: LIFT Is What You Need

Xu Yang, Yao Yongqiang, Huang Yufan, Qi Mengnan, Wang Maoquan, Gu Bin, Sundaresan Neel. arXiv 2023


Instruction tuning, a specialized technique for enhancing large language model (LLM) performance via instruction datasets, relies heavily on the quality of the data employed. Existing quality-improvement methods alter instruction data through dataset expansion or curation. However, expansion risks data redundancy, potentially compromising LLM performance, while curation confines the LLM's potential to the original dataset. Our aim is to surpass the original data quality without these shortcomings. To this end, we propose LIFT (LLM Instruction Fusion Transfer), a novel and versatile paradigm designed to elevate instruction quality to new heights. LIFT first broadens the data distribution to encompass more high-quality subspaces, then eliminates redundancy by concentrating on the high-quality segments across the overall data subspaces. Experimental results demonstrate that, even with a limited quantity of high-quality instruction data selected by our paradigm, LLMs not only consistently maintain robust performance across various tasks but also surpass some state-of-the-art results, highlighting the significant improvement in instruction quality achieved by our paradigm.
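The curation idea the abstract describes, keeping only high-quality instructions while discarding redundant ones, can be sketched as a greedy selection loop. This is a toy illustration, not the paper's actual algorithm: the `emb` vectors stand in for whatever sentence embeddings are used, and the `quality` scores stand in for a model-based quality rating; both field names are hypothetical placeholders.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def lift_select(pool, k, sim_threshold=0.9):
    """Greedy sketch in the spirit of LIFT's curation phase: walk the
    pool from highest to lowest quality score, keeping an instruction
    only if it is not a near-duplicate of anything already selected
    (redundancy elimination)."""
    selected = []
    for item in sorted(pool, key=lambda x: x["quality"], reverse=True):
        if all(cosine(item["emb"], s["emb"]) < sim_threshold for s in selected):
            selected.append(item)
        if len(selected) == k:
            break
    return selected

# Toy pool: "emb" would come from an embedding model, "quality" from a
# learned scorer; both are made-up values for illustration.
pool = [
    {"id": "a", "emb": [1.0, 0.0], "quality": 0.9},
    {"id": "b", "emb": [1.0, 0.1], "quality": 0.8},  # near-duplicate of "a"
    {"id": "c", "emb": [0.0, 1.0], "quality": 0.5},
]
chosen = lift_select(pool, k=2)
print([x["id"] for x in chosen])  # prints ['a', 'c']: "b" dropped as redundant
```

The key design choice this mirrors is that quality ranking alone is not enough: without the similarity check, the two highest-scoring but nearly identical items would both be kept, which is exactly the redundancy the paradigm aims to avoid.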
