An Empirical Study Of Retrieval Augmented Generation With Chain-of-thought

Zhao Yuetong, Cao Hongyu, Zhao Xianyu, Ou Zhijian. arXiv 2024

[Paper]    
Fine Tuning GPT Model Architecture Pretraining Methods RAG Reinforcement Learning Tools Training Techniques

Since the launch of ChatGPT at the end of 2022, generative dialogue models represented by ChatGPT have quickly become essential tools in daily life. As user expectations rise, enhancing the ability of generative dialogue models to solve complex problems has become a focal point of current research. This paper examines the effectiveness of the RAFT (Retrieval Augmented Fine-Tuning) method in improving the performance of generative dialogue models. RAFT combines chain-of-thought with supervised fine-tuning (SFT) and retrieval augmented generation (RAG), significantly enhancing the model's information extraction and logical reasoning abilities. We evaluate the RAFT method across multiple datasets and analyse its performance on various reasoning tasks, including long-form and short-form QA, tasks in both Chinese and English, and supportive and comparison reasoning tasks. Notably, this work addresses gaps in previous research on long-form QA tasks and Chinese datasets. We also evaluate the contribution of the chain-of-thought (CoT) component within the RAFT method. This work offers valuable insights for studies focused on enhancing the performance of generative dialogue models.
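
To make the idea concrete, below is a minimal sketch of how a RAFT-style training example might be assembled for SFT: retrieved passages are placed in the prompt alongside the question, and the target keeps the chain-of-thought reasoning rather than only the final answer. The prompt template, field names, and the toy example are illustrative assumptions, not the paper's exact implementation or data.

```python
# Sketch of assembling a RAFT-style (Retrieval Augmented Fine-Tuning) training pair:
# retrieved documents + question as input, chain-of-thought answer as the SFT target.
# Template and field names are assumptions for illustration only.

from dataclasses import dataclass
from typing import List


@dataclass
class RaftExample:
    question: str
    retrieved_passages: List[str]   # top-k passages from a retriever (assumed upstream step)
    cot_answer: str                 # reasoning chain that ends with the final answer


def build_sft_pair(example: RaftExample) -> dict:
    """Turn a RAFT example into an (input, target) pair for supervised fine-tuning."""
    context = "\n\n".join(
        f"[Document {i + 1}] {passage}"
        for i, passage in enumerate(example.retrieved_passages)
    )
    prompt = (
        "Answer the question using the documents below. "
        "Think step by step before giving the final answer.\n\n"
        f"{context}\n\nQuestion: {example.question}\nAnswer:"
    )
    # Keeping the chain of thought in the target is what lets the fine-tuned model
    # learn to extract evidence from the retrieved documents and reason over it.
    return {"input": prompt, "target": example.cot_answer}


if __name__ == "__main__":
    demo = RaftExample(
        question="Which city hosted the 2008 Summer Olympics?",
        retrieved_passages=[
            "The 2008 Summer Olympics were held in Beijing, China.",
            "London hosted the Summer Olympics in 2012.",
        ],
        cot_answer=(
            "Document 1 states the 2008 Summer Olympics were held in Beijing. "
            "Document 2 concerns 2012 and is not relevant. Final answer: Beijing."
        ),
    )
    pair = build_sft_pair(demo)
    print(pair["input"])
    print(pair["target"])
```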

Similar Work