
RAGLAB: A Modular And Research-oriented Unified Framework For Retrieval-augmented Generation

Zhang Xuanwang, Song Yunze, Wang Yidong, Tang Shuyun, Li Xinfeng, Zeng Zhengran, Wu Zhen, Ye Wei, Xu Wenyuan, Zhang Yue, Dai Xinyu, Zhang Shikun, Wen Qingsong. arXiv 2024

[Paper]
Tags: Ethics And Bias, RAG Tools

Large Language Models (LLMs) demonstrate human-level capabilities in dialogue, reasoning, and knowledge retention. However, even the most advanced LLMs still face challenges such as hallucination and keeping their knowledge up to date. Current research addresses this bottleneck by equipping LLMs with external knowledge, a technique known as Retrieval-Augmented Generation (RAG). However, two key issues constrain the development of RAG. First, there is a lack of comprehensive and fair comparisons between novel RAG algorithms. Second, open-source tools such as LlamaIndex and LangChain employ high-level abstractions, which results in a lack of transparency and limits the ability to develop novel algorithms and evaluation metrics. To close this gap, we introduce RAGLAB, a modular, research-oriented open-source library. RAGLAB reproduces 6 existing algorithms and provides a comprehensive ecosystem for investigating RAG algorithms. Leveraging RAGLAB, we conduct a fair comparison of 6 RAG algorithms across 10 benchmarks. With RAGLAB, researchers can efficiently compare the performance of various algorithms and develop novel ones.
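The retrieve-then-generate pattern the abstract describes can be sketched in a few lines. The snippet below is an illustrative toy, not RAGLAB's API: it uses a simple bag-of-words cosine similarity as a stand-in for a real retriever, and the corpus, function names, and prompt template are all invented for the example.

```python
import math
import re
from collections import Counter


def tokenize(text):
    # Lowercase word tokens; a real system would use a learned embedding model.
    return re.findall(r"\w+", text.lower())


def cosine_sim(a, b):
    # Bag-of-words cosine similarity between two texts.
    wa, wb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(wa[t] * wb[t] for t in wa)
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query, corpus, k=2):
    # Return the k passages most similar to the query.
    return sorted(corpus, key=lambda d: cosine_sim(query, d), reverse=True)[:k]


def build_prompt(query, corpus, k=2):
    # Prepend retrieved passages so the LLM can ground its answer in them.
    context = "\n".join(retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


corpus = [
    "RAG augments an LLM prompt with retrieved passages.",
    "Transformers use self-attention over token sequences.",
    "Retrieval reduces hallucination by grounding answers in documents.",
]

print(build_prompt("How does retrieval help with hallucination?", corpus, k=1))
```

In a full RAG algorithm, the prompt built this way would be passed to an LLM; the algorithms RAGLAB reproduces differ mainly in when and how this retrieval step is triggered and how its results are incorporated.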

Similar Work