
Towards Reliable And Fluent Large Language Models: Incorporating Feedback Learning Loops In QA Systems

Dongyub Lee, Taesun Whang, Chanhee Lee, Heuiseok Lim. arXiv 2023

Tags: Applications, GPT, Model Architecture, RAG, Tools

Large language models (LLMs) have emerged as versatile tools in many daily applications, but they suffer from issues that undermine their utility and trustworthiness: erroneous references (citation), hallucinated information (correctness), and the inclusion of superfluous details or the omission of crucial ones (fluency). To address these concerns, this study makes several key contributions. First, we build a dataset to train a critic model capable of evaluating the citation, correctness, and fluency of responses generated by LLMs in QA systems. Second, we propose an automated feedback mechanism that leverages the critic model to offer real-time feedback on these heterogeneous aspects of generated text. Third, we introduce a feedback learning loop that uses this critic model to iteratively improve the performance of the LLM responsible for response generation. Experimental results demonstrate the efficacy of our approach: for ChatGPT, citation precision improves by 4% and the MAUVE fluency metric by approximately 8%, while correctness remains high.
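To make the critic-driven loop concrete, here is a minimal sketch of how such a mechanism might be wired at inference time. Everything here is an assumption for illustration: the `generate` and `critique` callables, the `Critique` score container, and the 0.8 acceptance threshold are hypothetical stand-ins, not the paper's implementation. In the paper, the critic is itself a trained model and the feedback loop is also used to improve the generator through learning, whereas this sketch only shows critic feedback folded back into the prompt.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Critique:
    """Per-aspect scores a critic model might assign (hypothetical scale 0..1)."""
    citation: float     # precision of cited references
    correctness: float  # factual consistency of the answer
    fluency: float      # no superfluous details, no crucial omissions

    def passes(self, threshold: float = 0.8) -> bool:
        # Accept only if every aspect clears the (assumed) threshold.
        return min(self.citation, self.correctness, self.fluency) >= threshold


def feedback_loop(
    question: str,
    generate: Callable[[str], str],            # LLM: prompt -> answer
    critique: Callable[[str, str], Critique],  # critic: (question, answer) -> scores
    max_rounds: int = 3,
) -> str:
    """Iteratively refine an answer using the critic's per-aspect feedback."""
    prompt = question
    answer = generate(prompt)
    for _ in range(max_rounds):
        scores = critique(question, answer)
        if scores.passes():
            break
        # Fold the critic's scores back into the prompt so the generator
        # can target its weakest aspect on the next round.
        prompt = (
            f"{question}\n\nPrevious answer:\n{answer}\n\n"
            f"Feedback: citation={scores.citation:.2f}, "
            f"correctness={scores.correctness:.2f}, "
            f"fluency={scores.fluency:.2f}. "
            "Revise the answer to fix the lowest-scoring aspect."
        )
        answer = generate(prompt)
    return answer
```

Plugging in a real LLM for `generate` and a trained critic for `critique` would recover the inference-time half of the approach; the learning half would additionally use the critic's judgments as a training signal for the generator.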
