
Prompt-RAG: Pioneering Vector Embedding-Free Retrieval-Augmented Generation in Niche Domains, Exemplified by Korean Medicine

Kang Bongsu, Kim Jundong, Yun Tae-rim, Kim Chang-eop. arXiv 2024

[Paper]    
GPT Model Architecture Prompting RAG Reinforcement Learning

We propose natural language prompt-based retrieval-augmented generation (Prompt-RAG), a novel approach to enhance the performance of generative large language models (LLMs) in niche domains. Conventional RAG methods mostly require vector embeddings, yet the suitability of generic LLM-based embedding representations for specialized domains remains uncertain. To explore and exemplify this point, we compared vector embeddings from Korean Medicine (KM) and Conventional Medicine (CM) documents, finding that KM document embeddings correlated more with token overlaps and less with human-assessed document relatedness, in contrast to CM embeddings. Prompt-RAG, distinct from conventional RAG models, operates without the need for embedding vectors. Its performance was assessed through a Question-Answering (QA) chatbot application, where responses were evaluated for relevance, readability, and informativeness. The results showed that Prompt-RAG outperformed existing models, including ChatGPT and conventional vector embedding-based RAGs, in terms of relevance and informativeness. Despite challenges such as content structuring and response latency, ongoing advancements in LLMs are expected to encourage the adoption of Prompt-RAG, making it a promising tool for other domains in need of RAG methods.
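To make the embedding-free idea concrete, the following is a minimal sketch of a Prompt-RAG-style pipeline: instead of ranking chunks by vector similarity, the model is shown only the document's headings and asked to pick the relevant ones, whose bodies are then stuffed into the answer prompt. All names here (`corpus`, `ask_llm`, `prompt_rag_answer`) are hypothetical, and the LLM call is stubbed with word overlap so the sketch runs offline; a real system would send the selection prompt to GPT and parse its reply.

```python
# Hypothetical mini-corpus keyed by section heading (stand-in for a
# structured Korean Medicine document).
corpus = {
    "Qi and blood circulation": "Korean Medicine describes qi as ...",
    "Acupuncture point selection": "Points are chosen along meridians ...",
    "Herbal formula composition": "Formulas combine herbs by role ...",
}

def ask_llm(prompt: str, headings: list[str], question: str) -> list[str]:
    """Stand-in for a chat-model call that returns relevant headings.

    A real implementation would send `prompt` to an LLM and parse the
    reply; here we simply rank headings by word overlap with the question.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        headings,
        key=lambda h: len(q_words & set(h.lower().split())),
        reverse=True,
    )
    return scored[:2]  # keep the top-2 headings

def prompt_rag_answer(question: str) -> str:
    # Step 1: heading selection -- show the model only the table of contents.
    toc = list(corpus)
    selection_prompt = (
        "Given these section headings, list the ones relevant to the "
        f"question.\nHeadings: {toc}\nQuestion: {question}"
    )
    chosen = ask_llm(selection_prompt, toc, question)
    # Step 2: retrieval -- pull the bodies of the chosen sections.
    context = "\n\n".join(f"## {h}\n{corpus[h]}" for h in chosen)
    # Step 3: answer generation -- in a real system this assembled prompt
    # would go back to the LLM; here we return it for inspection.
    return f"Context:\n{context}\n\nQuestion: {question}"

print(prompt_rag_answer("How are acupuncture points selected?"))
```

No embedding index is built or queried at any point, which is the distinguishing property of the approach; the trade-off, as the abstract notes, is that the document must be structured into headings and the extra selection call adds latency.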

Similar Work