
Mixed Distillation Helps Smaller Language Model Better Reasoning

Li Chenglin, Chen Qianglong, Li Liangyue, Wang Caiyu, Li Yicheng, Chen Zulong, Zhang Yin. Arxiv 2023

[Paper]    
Applications Distillation Efficiency And Optimization GPT Model Architecture Prompting Reinforcement Learning Tools

While large language models (LLMs) have demonstrated exceptional performance on recent natural language processing (NLP) tasks, their deployment poses substantial challenges due to high computational and memory demands in real-world applications. Recent studies have focused on enhancing smaller models through knowledge distillation from LLMs, yielding promising results. However, these models often struggle to match the performance of LLMs, especially on tasks that require reasoning. In this work, we introduce the Mixed Distillation (MD) framework, which capitalizes on the strengths of the Program-of-Thought (PoT) and Chain-of-Thought (CoT) capabilities within LLMs, combining these prompting techniques and distilling the resulting capabilities into smaller models. Our experimental results show that MD significantly enhances the single-path and multi-path reasoning abilities of smaller models across various tasks. In terms of accuracy and generality on reasoning tasks, the resulting model surpasses the combined performance of the two individually distilled models. Notably, with MD, LLaMA2-7B and CodeLlama-7B reach 84.5% and 85.5% on the SVAMP benchmark, respectively, outperforming GPT-3.5-Turbo by 2.5% and 3.5%.
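The abstract describes mixing CoT and PoT rationales from a teacher LLM into one fine-tuning set for the student. The sketch below illustrates that idea only in outline; the prompt templates, the `query_teacher` helper, and the answer-consistency filter are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the Mixed Distillation idea: collect both Chain-of-Thought (CoT)
# and Program-of-Thought (PoT) rationales from a teacher LLM for each question and
# pool them into a single fine-tuning set for a smaller student model.
# `query_teacher`, the prompt templates, and the filtering rule are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DistillExample:
    question: str
    rationale: str  # CoT explanation text or PoT program emitted by the teacher
    answer: str
    style: str      # "cot" or "pot"


def build_mixed_distillation_set(
    questions: List[str],
    answers: List[str],
    query_teacher: Callable[[str], str],  # wraps calls to the teacher LLM
) -> List[DistillExample]:
    """Gather CoT and PoT rationales for every question, keeping only rationales
    whose text contains the gold answer (a crude consistency filter, assumed here)."""
    cot_prompt = "Q: {q}\nLet's think step by step."
    pot_prompt = "Q: {q}\nWrite a Python program that computes the answer."

    mixed: List[DistillExample] = []
    for q, gold in zip(questions, answers):
        for style, template in (("cot", cot_prompt), ("pot", pot_prompt)):
            rationale = query_teacher(template.format(q=q))
            if gold in rationale:  # discard rationales that miss the gold answer
                mixed.append(DistillExample(q, rationale, gold, style))
    return mixed
```

The pooled set would then be used for standard supervised fine-tuning of the student (e.g. LLaMA2-7B or CodeLlama-7B), so that at inference the student can follow either reasoning style, supporting both single-path and multi-path decoding.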

Similar Work