
Enhancing Retrieval-augmented Lms With A Two-stage Consistency Learning Compressor

Xu Chuankai, Zhao Dongming, Wang Bo, Xing Hanwen. arXiv 2024

Tags: Efficiency and Optimization, RAG, Tools

Despite the prevalence of retrieval-augmented language models (RALMs), seamlessly integrating retrieval mechanisms to improve performance on document-based tasks remains challenging. Although some post-retrieval processing methods for Retrieval-Augmented Generation (RAG) have succeeded, most still cannot distinguish pertinent from extraneous information, leading to inconsistencies and reduced precision in the generated output and, in turn, undermining the truthfulness of the language model's responses. To address these limitations, this work proposes a novel two-stage consistency learning approach that compresses retrieved information in retrieval-augmented language models. By incorporating consistency learning, the method generates summaries that remain coherent and aligned with the semantic representations of a teacher model while improving faithfulness to the original retrieved documents. The approach is validated empirically across multiple datasets, showing notable gains in precision and efficiency on question-answering tasks. It outperforms existing baselines and demonstrates the synergy of combining contrastive and consistency learning within the retrieval-augmented generation framework.
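The abstract describes combining a contrastive objective (separating pertinent from extraneous retrieved content) with a consistency objective (aligning the compressor's summaries with a teacher model). The paper's exact losses and weighting are not given here, so the sketch below is only a minimal illustration of that general recipe: an InfoNCE-style contrastive term over embeddings plus a KL-divergence consistency term over output distributions, combined with a hypothetical weight `lam`. All function names and the combination formula are assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce(student_emb, pos_emb, neg_embs, tau=0.1):
    """Contrastive (InfoNCE-style) loss: pull the compressor's summary
    embedding toward the relevant document embedding, push it away from
    distractor (extraneous) document embeddings."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(student_emb, pos_emb) / tau)
    neg = sum(np.exp(cos(student_emb, n) / tau) for n in neg_embs)
    return -np.log(pos / (pos + neg))

def kl_consistency(student_logits, teacher_logits):
    """Consistency loss: KL divergence from the teacher's output
    distribution to the student's, encouraging aligned summaries."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    p = softmax(teacher_logits)   # teacher distribution
    q = softmax(student_logits)   # student distribution
    return float(np.sum(p * np.log(p / q)))

def two_stage_loss(student_emb, pos_emb, neg_embs,
                   student_logits, teacher_logits, lam=0.5):
    # `lam` is a hypothetical weighting hyperparameter; the paper may
    # combine (or stage) the two objectives differently.
    return (info_nce(student_emb, pos_emb, neg_embs)
            + lam * kl_consistency(student_logits, teacher_logits))
```

In this reading, the contrastive term handles the "distinguish pertinent from extraneous" stage, while the KL term handles teacher-aligned consistency; staging them (rather than summing) is one of the design choices the paper's two-stage framing suggests.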
