RankRAG: Unifying Context Ranking With Retrieval-Augmented Generation In LLMs

Yue Yu, Wei Ping, Zihan Liu, Boxin Wang, Jiaxuan You, Chao Zhang, Mohammad Shoeybi, Bryan Catanzaro. arXiv 2024

[Paper]    
Tags: Fine Tuning, GPT, Model Architecture, Pretraining Methods, RAG, Tools, Training Techniques

Large language models (LLMs) typically use the top-k contexts returned by a retriever in retrieval-augmented generation (RAG). In this work, we propose RankRAG, a novel instruction fine-tuning framework that tunes a single LLM for the dual purposes of context ranking and answer generation in RAG. In particular, adding a small fraction of ranking data to the training blend makes the instruction-tuned LLM a surprisingly effective ranker, outperforming existing expert ranking models, including the same LLM fine-tuned exclusively on a large amount of ranking data. For generation, we compare our model with many strong baselines, including GPT-4-0613, GPT-4-turbo-2024-0409, and ChatQA-1.5, an open-source model with state-of-the-art performance on RAG benchmarks. Specifically, our Llama3-RankRAG significantly outperforms Llama3-ChatQA-1.5 and the GPT-4 models on nine knowledge-intensive benchmarks. It also performs comparably to GPT-4 on five RAG benchmarks in the biomedical domain without instruction fine-tuning on biomedical data, demonstrating strong generalization to new domains.
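The abstract describes a two-role model: the same instruction-tuned LLM first reranks the retriever's candidate contexts, then generates the answer from the top-ranked ones. The following is a minimal sketch of that inference flow, not the paper's released code: `score_fn`, `generate_fn`, `keep_top`, and the toy stand-ins in `__main__` are illustrative assumptions standing in for calls to the single RankRAG-tuned model.

```python
# Hypothetical sketch of RankRAG-style inference: one instruction-tuned LLM
# both scores retrieved passages (context ranking) and generates the answer
# (retrieval-augmented generation). score_fn / generate_fn are placeholders
# for prompts to that single model, not an official API.
from typing import Callable, List, Tuple


def rankrag_answer(
    question: str,
    retrieved_passages: List[str],
    score_fn: Callable[[str, str], float],        # LLM-derived relevance score
    generate_fn: Callable[[str, List[str]], str],  # LLM answer generation
    keep_top: int = 5,
) -> Tuple[str, List[str]]:
    """Rerank the retriever's candidates with the LLM, then generate an answer."""
    # Stage 1: context ranking with the instruction-tuned LLM.
    reranked = sorted(
        retrieved_passages,
        key=lambda passage: score_fn(question, passage),
        reverse=True,
    )
    top_contexts = reranked[:keep_top]

    # Stage 2: retrieval-augmented generation over the reranked contexts.
    answer = generate_fn(question, top_contexts)
    return answer, top_contexts


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without model weights.
    def toy_score(question: str, passage: str) -> float:
        overlap = set(question.lower().split()) & set(passage.lower().split())
        return float(len(overlap))

    def toy_generate(question: str, contexts: List[str]) -> str:
        return f"Answer to '{question}' grounded in {len(contexts)} context(s)."

    passages = ["RankRAG unifies ranking and generation.", "Unrelated text."]
    print(rankrag_answer("What does RankRAG unify?", passages,
                         toy_score, toy_generate, keep_top=1))
```

In practice both stages would prompt the same fine-tuned checkpoint with different instructions (a relevance-scoring prompt versus an answer-generation prompt); the sketch only makes the control flow of that dual use explicit.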

Similar Work