
Improve Student's Reasoning Generalizability Through Cascading Decomposed CoTs Distillation

Dai Chengwei, Li Kun, Zhou Wei, Hu Songlin. arXiv 2024

[Paper] [Code]    
Distillation · Efficiency And Optimization · Has Code · Training Techniques

Large language models (LLMs) exhibit enhanced reasoning at larger scales, driving efforts to distill these capabilities into smaller models via teacher-student learning. Previous works simply fine-tune student models on teacher-generated Chain-of-Thoughts (CoTs) data. Although these methods enhance in-domain (IND) reasoning performance, they struggle to generalize to out-of-domain (OOD) tasks. We believe that the widespread spurious correlations between questions and answers may lead the model to preset a specific answer, which restricts the diversity and generalizability of its reasoning process. In this paper, we propose Cascading Decomposed CoTs Distillation (CasCoD) to address these issues by decomposing the traditional single-step learning process into two cascaded learning steps. Specifically, by restructuring the training objectives – removing the answer from outputs and concatenating the question with the rationale as input – CasCoD's two-step learning process ensures that students focus on learning rationales without interference from preset answers, thus improving reasoning generalizability. Extensive experiments demonstrate the effectiveness of CasCoD on both IND and OOD benchmark reasoning datasets. Code can be found at https://github.com/C-W-D/CasCoD.
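The restructuring described in the abstract can be read as splitting each teacher-generated (question, rationale, answer) triple into two cascaded training instances: one that maps the question to the rationale alone, and one that maps the question concatenated with the rationale to the answer. The sketch below illustrates this reading; it is a minimal example based only on the abstract, and names such as `cascod_split` are hypothetical rather than taken from the released code.

```python
# Illustrative sketch (not the authors' implementation): restructure one
# CoT distillation example into the two cascaded steps described in the
# CasCoD abstract. Field names ("input"/"target") are assumptions.

def cascod_split(question: str, rationale: str, answer: str):
    """Turn a (question, rationale, answer) triple into two training
    instances so the student learns the rationale without conditioning
    on a preset answer."""
    # Step 1: the answer is removed from the output; the student learns
    # to generate the rationale from the question alone.
    step1 = {"input": question, "target": rationale}
    # Step 2: the question is concatenated with the rationale as input;
    # the student learns to produce the answer from both.
    step2 = {"input": question + "\n" + rationale, "target": answer}
    return step1, step2


if __name__ == "__main__":
    s1, s2 = cascod_split(
        question="If a train travels 60 km in 1.5 hours, what is its average speed?",
        rationale="Average speed is distance divided by time: 60 / 1.5 = 40 km/h.",
        answer="40 km/h",
    )
    print(s1)
    print(s2)
```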

Similar Work