A Tale of Trust and Accuracy: Base vs. Instruct LLMs in RAG Systems

Florin Cuconasu, Giovanni Trappolini, Nicola Tonellotto, Fabrizio Silvestri. arXiv 2024

[Paper]
Applications, RAG, Training Techniques

Retrieval Augmented Generation (RAG) represents a significant advancement in artificial intelligence, combining a retrieval phase with a generative phase, the latter typically powered by large language models (LLMs). Current common practice in RAG is to use "instructed" LLMs, which are fine-tuned with supervised training to enhance their ability to follow instructions and are aligned with human preferences using state-of-the-art techniques. Contrary to popular belief, our study demonstrates that base models outperform their instructed counterparts in RAG tasks by 20% on average under our experimental settings. This finding challenges the prevailing assumptions about the superiority of instructed LLMs in RAG applications. Further investigation reveals a more nuanced situation, questioning fundamental aspects of RAG and suggesting the need for broader discussion on the topic; or, as Fromm would have it, "Seldom is a glance at the statistics enough to understand the meaning of the figures".
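
To make the comparison concrete, here is a minimal sketch of the kind of RAG setup the paper studies: the same retrieved passages are prepended to a question and handed to a base checkpoint and to its instruct/chat counterpart, whose answers can then be compared. The model names, prompt template, and retrieved snippets below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal RAG sketch (illustrative, not the paper's exact pipeline):
# generate an answer from the same retrieved context with a base model
# and with its instruct counterpart, then compare the outputs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in for the output of a retriever (e.g. BM25 or a dense index).
RETRIEVED = [
    "Rome is the capital city of Italy.",
    "Italy is a country in Southern Europe.",
]
QUESTION = "What is the capital of Italy?"


def rag_prompt(passages, question):
    """Prepend retrieved passages to the question (template is an assumption)."""
    context = "\n".join(f"Document [{i + 1}]: {p}" for i, p in enumerate(passages))
    return f"{context}\n\nQuestion: {question}\nAnswer:"


def generate(model_name, prompt, max_new_tokens=32):
    """Greedy-decode a short answer from the given checkpoint."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Strip the prompt tokens so only the generated answer remains.
    answer_ids = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(answer_ids, skip_special_tokens=True).strip()


if __name__ == "__main__":
    prompt = rag_prompt(RETRIEVED, QUESTION)
    for name in (
        "meta-llama/Llama-2-7b-hf",       # base checkpoint (illustrative choice)
        "meta-llama/Llama-2-7b-chat-hf",  # instruct/chat counterpart (illustrative)
    ):
        print(f"{name} -> {generate(name, prompt)}")
```

In this kind of setup, only the checkpoint changes between runs; the retriever, prompt, and decoding parameters stay fixed, which is what allows the base-versus-instruct comparison the abstract describes.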

Similar Work