
Searching for Best Practices in Retrieval-Augmented Generation

Wang Xiaohua, Wang Zhenghua, Gao Xuan, Zhang Feiran, Wu Yixin, Xu Zhibo, Shi Tianyuan, Wang Zhengyuan, Li Shizheng, Qian Qi, Yin Ruicheng, Lv Changze, Zheng Xiaoqing, Huang Xuanjing. arXiv 2024

[Paper]    
Tags: Efficiency and Optimization, Multimodal Models, RAG, Reinforcement Learning

Retrieval-augmented generation (RAG) techniques have proven effective at integrating up-to-date information, mitigating hallucinations, and improving response quality, particularly in specialized domains. While many RAG approaches have been proposed to enhance large language models through query-dependent retrieval, they still suffer from complex implementations and prolonged response times. A typical RAG workflow involves multiple processing steps, each of which can be executed in various ways. Here, we investigate existing RAG approaches and their potential combinations to identify optimal RAG practices. Through extensive experiments, we suggest several strategies for deploying RAG that balance performance and efficiency. Moreover, we demonstrate that multimodal retrieval techniques can significantly enhance question answering over visual inputs and accelerate the generation of multimodal content via a “retrieval as generation” strategy.
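To make the abstract's "multiple processing steps, each of which can be executed in various ways" concrete, below is a minimal Python sketch of a modular, query-dependent RAG pipeline (retrieve, rerank, repack, generate). Every component here (the toy word-overlap retriever, the identity reranker, the reverse-order repacking, and the stubbed generator) is a hypothetical placeholder for illustration, not the paper's implementation; each stage is a slot that could be filled by any of the alternatives the paper compares.

```python
# Minimal sketch of a modular RAG pipeline. Each stage is a swappable stand-in;
# a real deployment would plug in an embedding retriever, a cross-encoder
# reranker, and an LLM call where the stubs are.

CORPUS = [
    "RAG retrieves documents and conditions generation on them.",
    "Reranking orders retrieved passages by estimated relevance.",
    "Repacking controls the order in which context is shown to the LLM.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy lexical retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def rerank(query: str, docs: list[str]) -> list[str]:
    """Stand-in for a learned reranker; here it keeps the retriever's order."""
    return docs

def repack(docs: list[str]) -> str:
    """'Reverse' repacking: place the most relevant passage last, i.e. closest
    to the query in the final prompt."""
    return "\n".join(reversed(docs))

def generate(query: str, context: str) -> str:
    """Stand-in for the LLM call conditioned on the repacked context."""
    return f"Answer to {query!r} given context:\n{context}"

def rag_pipeline(query: str) -> str:
    docs = retrieve(query)
    docs = rerank(query, docs)
    context = repack(docs)
    return generate(query, context)

if __name__ == "__main__":
    print(rag_pipeline("What does reranking do in RAG?"))
```

In this framing, the search for best practices is a search over the combinations of choices at each slot, trading answer quality against the latency each extra stage adds.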
