
Improving Language Models Via Plug-and-play Retrieval Feedback

Yu Wenhao, Zhang Zhihan, Liang Zhenwen, Jiang Meng, Sabharwal Ashish. arXiv 2023

[Paper]

Tags: Applications, Few Shot, Fine Tuning, Pretraining Methods, Reinforcement Learning, Tools, Training Techniques

Large language models (LLMs) exhibit remarkable performance across various NLP tasks. However, they often generate incorrect or hallucinated information, which hinders their practical applicability in real-world scenarios. Human feedback has been shown to effectively enhance the factuality and quality of generated content, addressing some of these limitations. However, this approach is resource-intensive: it relies on manual input and supervision, which can be time-consuming and expensive, and it cannot be provided during inference, further limiting its utility in dynamic and interactive applications. In this paper, we introduce ReFeed, a novel pipeline designed to enhance LLMs by providing automatic retrieval feedback in a plug-and-play framework, without the need for expensive fine-tuning. ReFeed first generates initial outputs, then utilizes a retrieval model to acquire relevant information from large document collections, and finally incorporates the retrieved information into the in-context demonstration for output refinement, thereby addressing the limitations of LLMs in a more efficient and cost-effective manner. Experiments on four knowledge-intensive benchmark datasets demonstrate that ReFeed improves performance by more than 6.0% in the zero-shot setting and by 2.5% in the few-shot setting, compared to baselines that do not use retrieval feedback.
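The three-stage pipeline (generate, retrieve, refine) can be summarized in a short sketch. The snippet below is illustrative only: `generate` and `search` stand in for an LLM call and a retrieval model, and the prompt templates are assumptions rather than the prompts used in the paper.

```python
from typing import Callable, List

# A minimal sketch of the ReFeed pipeline described in the abstract.
# `generate` (an LLM call) and `search` (a retriever over a large
# document collection) are hypothetical interfaces, not the authors' code.

def refeed(
    question: str,
    generate: Callable[[str], str],
    search: Callable[[str, int], List[str]],
    top_k: int = 5,
) -> str:
    """Refine an LLM answer using plug-and-play retrieval feedback."""
    # Step 1: generate an initial answer without external knowledge.
    initial = generate(f"Question: {question}\nAnswer:")

    # Step 2: query the retriever with the question plus the initial
    # answer, so retrieval is conditioned on the model's first attempt.
    passages = search(f"{question} {initial}", top_k)

    # Step 3: place the retrieved passages into the prompt as in-context
    # feedback and ask the model for a refined answer.
    context = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, 1))
    return generate(
        f"Passages:\n{context}\n\n"
        f"Question: {question}\n"
        f"Initial answer: {initial}\n"
        "Using the passages, give a refined answer:"
    )
```

One design point worth noting: including the model's initial output in the retrieval query lets the retriever surface evidence targeted at what the model actually said, rather than at the question alone, which is what makes the retrieved passages act as feedback.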
