Context-augmented Retrieval: A Novel Framework For Fast Information Retrieval Based Response Generation Using Large Language Model

Ganesh Sai, Purwar Anupam, B Gautam. arXiv 2024

[Paper]    
Applications Prompting RAG Tools

Consistently generating high-quality answers by embedding contextual information in the prompt passed to a Large Language Model (LLM) depends on the quality of information retrieval. As the corpus of contextual information grows, the answer/inference quality of Retrieval Augmented Generation (RAG) based Question Answering (QA) systems declines. This work addresses that problem by combining classical text classification with an LLM to enable fast information retrieval from the vector store while ensuring the relevance of the retrieved information. To this end, it proposes a new approach, Context-Augmented Retrieval (CAR), in which the vector database is partitioned by real-time classification of the information flowing into the corpus. CAR demonstrates high-quality answer generation along with a significant reduction in both information retrieval and answer generation time.
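The core idea above — classify documents as they arrive, store each in a category-specific partition of the vector store, then route a query to the matching partition so only that slice is searched — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the keyword classifier, the two example categories, and the bag-of-words similarity are all stand-ins for the paper's real-time classifier and embedding-based vector store.

```python
import math
from collections import Counter, defaultdict

# Hypothetical categories for illustration; the paper's classifier and
# taxonomy are not specified here.
CATEGORY_KEYWORDS = {
    "finance": {"revenue", "profit", "market", "stock"},
    "health": {"patient", "disease", "treatment", "clinical"},
}


def classify(text: str) -> str:
    """Toy classifier: pick the category with the largest keyword overlap."""
    tokens = set(text.lower().split())
    scores = {cat: len(tokens & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    return max(scores, key=scores.get)


def embed(text: str) -> Counter:
    """Bag-of-words vector standing in for a real embedding model."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# One list of (doc, vector) pairs per category: the partitioned vector store.
partitions: dict[str, list[tuple[str, Counter]]] = defaultdict(list)


def ingest(doc: str) -> None:
    """Classify incoming text in real time and place it in one partition."""
    partitions[classify(doc)].append((doc, embed(doc)))


def retrieve(query: str, k: int = 2) -> list[str]:
    """Search only the partition matching the query's predicted category."""
    bucket = partitions[classify(query)]
    qvec = embed(query)
    ranked = sorted(bucket, key=lambda pair: cosine(qvec, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]


ingest("quarterly revenue and profit beat market expectations")
ingest("stock prices fell after the market opened")
ingest("the patient responded well to the new treatment")

print(retrieve("which treatment helped the patient?"))
```

Because the query is routed to a single partition, similarity search scans only a fraction of the corpus, which is the mechanism behind the reported reduction in retrieval time.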

Similar Work