
G-DIG: Towards Gradient-based Diverse And High-quality Instruction Data Selection For Machine Translation

Pan Xingyuan, Huang Luyang, Kang Liyan, Liu Zhicheng, Lu Yu, Cheng Shanbo. arXiv 2024

Tags: Applications, Training Techniques

Large Language Models (LLMs) have demonstrated remarkable abilities in general scenarios, and instruction finetuning empowers them to align with humans across a variety of tasks. Nevertheless, the diversity and quality of instruction data remain two main challenges for instruction finetuning. To address this, we propose a novel gradient-based method that automatically selects high-quality and diverse instruction finetuning data for machine translation. Our key innovation is to analyze how individual training examples influence the model during training. Specifically, we select as high-quality those training examples that exert a beneficial influence on the model, measured with the Influence Function against a small high-quality seed dataset. Moreover, to enhance the diversity of the training data, we maximize the variety of influences the examples have on the model by clustering their gradients and resampling. Extensive experiments on the WMT22 and FLORES translation tasks demonstrate the superiority of our method, and in-depth analyses further validate its effectiveness and generalization.
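For context, the standard influence function (Koh & Liang, 2017) scores a training example $z$ against a test example $z'$ as $\mathcal{I}(z, z') = -\nabla_\theta L(z', \theta)^\top H_\theta^{-1} \nabla_\theta L(z, \theta)$, where $H_\theta$ is the Hessian of the training loss; G-DIG builds its quality criterion on this family of scores. The sketch below illustrates the two-stage idea only and is not the authors' code. It assumes per-example gradients have already been extracted and flattened into vectors, and it approximates the quality score with a first-order gradient inner product against the seed set (the paper's influence functions additionally involve an inverse-Hessian-vector product). All names (`candidate_grads`, `seed_grads`, `select_data`) are illustrative.

```python
# Minimal sketch of gradient-based quality filtering + diversity resampling.
# Assumption: gradients are precomputed dense vectors; this is a toy stand-in
# for the paper's influence-function pipeline, not a reimplementation of it.
import numpy as np
from sklearn.cluster import KMeans

def select_data(candidate_grads: np.ndarray,
                seed_grads: np.ndarray,
                n_clusters: int = 8,
                per_cluster: int = 4) -> np.ndarray:
    """Return indices of selected candidate examples.

    candidate_grads: (N, D) flattened per-example training gradients
    seed_grads:      (M, D) gradients of the small high-quality seed set
    """
    # Stage 1 (quality): score each candidate by its first-order influence
    # on the seed set, i.e. alignment with the mean seed gradient, and keep
    # only examples with beneficial (positive) influence.
    seed_direction = seed_grads.mean(axis=0)      # (D,)
    quality = candidate_grads @ seed_direction    # (N,)
    keep = np.where(quality > 0)[0]
    if len(keep) == 0:
        return keep

    # Stage 2 (diversity): cluster the surviving gradients so that selected
    # examples influence the model in varied directions, then draw a fixed
    # budget from every cluster.
    k = min(n_clusters, len(keep))
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        candidate_grads[keep])

    selected = []
    for c in range(k):
        members = keep[labels == c]
        # One plausible resampling rule: take the top-quality examples in
        # each cluster; the paper's exact rule may differ.
        order = members[np.argsort(-quality[members])]
        selected.extend(order[:per_cluster])
    return np.asarray(selected)
```

In practice the full gradients of a billion-parameter model are too large to cluster directly, so a pipeline in this style would typically restrict gradients to a subset of layers or project them to low dimension first; the dense arrays above are stand-ins for such reduced representations.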
