
RAG Based Question-answering For Contextual Response Prediction System

Sriram Veturi, Saurabh Vaichal, Reshma Lal Jagadheesh, Nafis Irtiza Tripto, Nian Yan. Arxiv 2024

[Paper]    
Agentic Applications BERT Model Architecture RAG Reinforcement Learning Tools Uncategorized

Large Language Models (LLMs) have shown versatility in various Natural Language Processing (NLP) tasks, including their potential as effective question-answering systems. However, to provide precise and relevant information in response to specific customer queries in industry settings, LLMs require access to a comprehensive knowledge base to avoid hallucinations. Retrieval Augmented Generation (RAG) emerges as a promising technique to address this challenge. Yet, developing an accurate question-answering framework for real-world applications using RAG entails several challenges: 1) data availability issues, 2) evaluating the quality of generated content, and 3) the costly nature of human evaluation. In this paper, we introduce an end-to-end framework that employs LLMs with RAG capabilities for industry use cases. Given a customer query, the proposed system retrieves relevant knowledge documents and leverages them, along with previous chat history, to generate response suggestions for customer service agents in the contact centers of a major retail company. Through comprehensive automated and human evaluations, we show that this solution outperforms the current BERT-based algorithms in accuracy and relevance. Our findings suggest that RAG-based LLMs can be an excellent support to human customer service representatives by lightening their workload.
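The pipeline the abstract describes — retrieve relevant knowledge documents for a customer query, then combine them with prior chat history to prompt an LLM for a suggested agent response — can be sketched as below. This is a minimal illustration, not the paper's implementation: the keyword-overlap retriever, the function names, and the sample knowledge base are all stand-ins (the paper does not specify its retriever or model here).

```python
# Hypothetical sketch of the RAG response-suggestion flow: retrieve documents,
# then assemble them with chat history into a grounded prompt for an LLM.

def retrieve(query, knowledge_base, top_k=2):
    """Rank documents by naive keyword overlap with the query
    (a stand-in for a real dense or sparse retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, chat_history, documents):
    """Combine retrieved documents and chat history into a prompt that
    instructs the model to answer only from the supplied knowledge."""
    context = "\n".join(f"- {d}" for d in documents)
    history = "\n".join(chat_history)
    return (
        f"Knowledge:\n{context}\n\n"
        f"Chat history:\n{history}\n\n"
        f"Customer: {query}\n"
        "Suggest a response for the agent, using only the knowledge above."
    )

# Illustrative knowledge base and query (not from the paper).
kb = [
    "Returns are accepted within 30 days with a receipt.",
    "Store hours are 9am to 9pm on weekdays.",
    "Gift cards cannot be redeemed for cash.",
]
query = "Can I return an item after 30 days?"
docs = retrieve(query, kb)
prompt = build_prompt(query, ["Customer: Hi, I need help with a return."], docs)
```

The resulting `prompt` would then be sent to the LLM; constraining the model to the retrieved knowledge is what mitigates the hallucination problem the abstract raises.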

Similar Work