
Advancing LLM Reasoning Generalists With Preference Trees

Yuan Lifan, Cui Ganqu, Wang Hanbin, Ding Ning, Wang Xingyao, Deng Jia, Shan Boji, Chen Huimin, Xie Ruobing, Lin Yankai, Liu Zhenghao, Zhou Bowen, Peng Hao, Liu Zhiyuan, Sun Maosong. arXiv 2024

[Paper]    
Applications Fine Tuning GPT Model Architecture Pretraining Methods Reinforcement Learning Training Techniques

We introduce Eurus, a suite of large language models (LLMs) optimized for reasoning. Finetuned from Mistral-7B and CodeLlama-70B, Eurus models achieve state-of-the-art results among open-source models on a diverse set of benchmarks covering mathematics, code generation, and logical reasoning problems. Notably, Eurus-70B beats GPT-3.5 Turbo in reasoning in a comprehensive benchmarking across 12 tests covering five tasks, and achieves a 33.3% pass@1 accuracy on LeetCode and 32.6% on TheoremQA, two challenging benchmarks, substantially outperforming existing open-source models by margins of more than 13.3%. The strong performance of Eurus can be primarily attributed to UltraInteract, our newly curated large-scale, high-quality alignment dataset specifically designed for complex reasoning tasks. UltraInteract can be used in both supervised fine-tuning and preference learning. For each instruction, it includes a preference tree consisting of (1) reasoning chains with diverse planning strategies in a unified format, (2) multi-turn interaction trajectories with the environment and the critique, and (3) pairwise data to facilitate preference learning. UltraInteract allows us to conduct an in-depth exploration of preference learning for reasoning tasks. Our investigation reveals that some well-established preference learning algorithms may be less suitable for reasoning tasks than they are for general conversation. Inspired by this, we derive a novel reward modeling objective which, together with UltraInteract, leads to a strong reward model.
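To make the "preference tree" idea concrete, below is a minimal, illustrative Python sketch of how such a tree might be represented and how pairwise preference data could be extracted from it. The class and field names (`TreeNode`, `PreferenceTree`, `correct`, `critique`, etc.) are assumptions for illustration, not the paper's actual data schema; the point is only that correct and incorrect sibling actions sharing the same interaction history yield (chosen, rejected) pairs usable for preference learning or reward-model training.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TreeNode:
    """One action (reasoning turn) in a multi-turn trajectory. (Hypothetical schema.)"""
    response: str                                   # reasoning chain or code produced by the model
    critique: str = ""                              # feedback from the environment / critic on this action
    correct: bool = False                           # whether the action passes the correctness check
    children: List["TreeNode"] = field(default_factory=list)  # follow-up actions after the critique

@dataclass
class PreferenceTree:
    """A preference tree rooted at a single instruction. (Hypothetical schema.)"""
    instruction: str
    root_actions: List[TreeNode] = field(default_factory=list)

def pairwise_data(tree: PreferenceTree):
    """Collect (chosen, rejected) pairs from sibling actions that share the same history."""
    pairs = []

    def walk(siblings: List[TreeNode], history: List[str]) -> None:
        correct = [n for n in siblings if n.correct]
        wrong = [n for n in siblings if not n.correct]
        # Pair each correct action against each incorrect sibling under the same context.
        for c in correct:
            for w in wrong:
                pairs.append({
                    "prompt": history,
                    "chosen": c.response,
                    "rejected": w.response,
                })
        # Recurse: an action plus its critique extends the interaction history.
        for n in siblings:
            walk(n.children, history + [n.response, n.critique])

    walk(tree.root_actions, [tree.instruction])
    return pairs
```

Pairs produced this way could feed standard preference-learning setups (e.g., a Bradley-Terry-style reward model or DPO-style training); the paper's specific reward modeling objective is not reproduced here.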

Similar Work