QCRD: Quality-guided Contrastive Rationale Distillation For Large Language Models

Wang Wei, Li Zhaowei, Xu Qi, Cai Yiqing, Song Hang, Qi Qi, Zhou Ran, Huang Zhida, Wang Tao, Xiao Li. arXiv 2024

[Paper]    
Tags: Distillation, Efficiency And Optimization, Fine Tuning, Pretraining Methods, Reinforcement Learning, Training Techniques

Deploying large language models (LLMs) poses challenges in terms of resource limitations and inference efficiency. To address these challenges, recent research has focused on smaller task-specific language models, enhanced by distilling knowledge rationales generated by LLMs. However, previous works mostly emphasize the effectiveness of positive knowledge, overlooking noise in that knowledge and leaving negative knowledge unexplored. In this paper, we propose a general approach, quality-guided contrastive rationale distillation, for learning reasoning capacity from a contrastive learning perspective. For positive knowledge, we collect positive rationales via self-consistency to denoise the rationales that the LLM generates through temperature sampling. For negative knowledge, we generate negative rationales by temperature sampling from the smaller language model itself at its previous training iteration. Finally, a contrastive loss is designed to distill both positive and negative rationales into the smaller language model, where an online-updated discriminator judges the quality of each rationale and assigns weights that better optimize training. Extensive experiments on multiple reasoning tasks demonstrate that our method consistently outperforms previous distillation methods and produces higher-quality rationales.
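To make the training objective concrete, below is a minimal PyTorch sketch of a quality-weighted contrastive rationale loss in the spirit of what the abstract describes. The exact loss form (a margin-based push-away term), the discriminator interface, and all names (`contrastive_rationale_loss`, `w_pos`, `w_neg`, `margin`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def sequence_nll(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Mean per-token negative log-likelihood of each rationale sequence.

    logits: [B, T, V] student logits; labels: [B, T] token ids with
    padding positions set to -100 (ignored by cross_entropy).
    """
    nll = F.cross_entropy(
        logits.transpose(1, 2), labels, ignore_index=-100, reduction="none"
    )  # [B, T]
    mask = (labels != -100).float()
    return (nll * mask).sum(-1) / mask.sum(-1).clamp(min=1.0)  # [B]


def contrastive_rationale_loss(
    pos_logits, pos_labels,  # student logits/labels on LLM-generated (positive) rationales
    neg_logits, neg_labels,  # student logits/labels on its own previous-iteration (negative) rationales
    w_pos, w_neg,            # [B] quality weights from the discriminator (assumed interface)
    margin: float = 1.0,     # hypothetical margin hyperparameter
) -> torch.Tensor:
    pos_nll = sequence_nll(pos_logits, pos_labels)
    neg_nll = sequence_nll(neg_logits, neg_labels)
    # Pull the student toward high-quality positive rationales, and push the
    # likelihood of negative rationales down until their NLL clears the margin.
    return (w_pos * pos_nll + w_neg * F.relu(margin - neg_nll)).mean()
```

The margin keeps the push-away term bounded: once a negative rationale is already unlikely enough under the student, it stops contributing gradient, so the student is not driven toward degenerate outputs while unlearning its own earlier mistakes.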
