
Kapqa: Knowledge-augmented Product Question-answering

Eppalapally Swetha, Dangi Daksh, Bhat Chaithra, Gupta Ankita, Zhang Ruiyi, Agarwal Shubham, Bagga Karishma, Yoon Seunghyun, Lipka Nedim, Rossi Ryan A., Dernoncourt Franck. arXiv 2024

Applications · RAG · Reinforcement Learning · Tools

Question-answering for domain-specific applications has recently attracted much interest due to the latest advancements in large language models (LLMs). However, accurately assessing the performance of these applications remains a challenge, mainly due to the lack of suitable benchmarks that effectively simulate real-world scenarios. To address this challenge, we introduce two product question-answering (QA) datasets focused on Adobe Acrobat and Photoshop products to help evaluate the performance of existing models on domain-specific product QA tasks. Additionally, we propose a novel knowledge-driven RAG-QA framework to enhance the performance of the models on the product QA task. Our experiments demonstrate that incorporating domain knowledge through query reformulation improves both retrieval and generation performance compared to standard RAG-QA methods. The improvement is modest, however, which underscores the difficulty of the datasets we introduce.
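
The abstract does not give implementation details, so the sketch below only illustrates where a query-reformulation step could sit in a generic RAG-QA pipeline. All names (`reformulate_query`, `rag_qa`, the glossary-based rewriting, and the `retriever`/`llm` callables) are illustrative assumptions, not the paper's actual framework.

```python
# Minimal sketch of a RAG-QA loop where domain knowledge is injected by
# rewriting the user query before retrieval. Names and the glossary-hint
# strategy are hypothetical; they are not taken from the KaPQA paper.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Document:
    text: str
    score: float  # retriever relevance score (unused here, kept for clarity)


def reformulate_query(query: str,
                      domain_glossary: Dict[str, str],
                      llm: Callable[[str], str]) -> str:
    """Rewrite the query using product-specific terminology before retrieval."""
    # Pull glossary entries mentioned in the query and expose them as hints.
    hints = "; ".join(f"{term}: {definition}"
                      for term, definition in domain_glossary.items()
                      if term.lower() in query.lower())
    prompt = (
        "Rewrite the question using precise product terminology.\n"
        f"Domain hints: {hints or 'none'}\n"
        f"Question: {query}\n"
        "Rewritten question:"
    )
    return llm(prompt).strip()


def rag_qa(query: str,
           retriever: Callable[[str, int], List[Document]],
           llm: Callable[[str], str],
           domain_glossary: Dict[str, str],
           top_k: int = 5) -> str:
    """Standard retrieve-then-generate QA, but retrieval uses the rewritten query."""
    rewritten = reformulate_query(query, domain_glossary, llm)
    docs = retriever(rewritten, top_k)
    context = "\n\n".join(d.text for d in docs)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
    return llm(prompt)
```

In this sketch the original query is kept for answer generation and only the retrieval query is reformulated; whether the paper does the same is not stated in the abstract.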

Similar Work