When Giant Language Brains Just Aren't Enough! Domain Pizzazz With Knowledge Sparkle Dust

Nguyen Minh-tien, Nguyen Duy-hung, Sabahi Shahab, Le Hung, Yang Jeff, Hotta Hajime. arXiv 2023

[Paper]
Applications GPT Model Architecture Reinforcement Learning

Large language models (LLMs) have significantly advanced the field of natural language processing, with GPT models at the forefront. While their remarkable performance spans a range of tasks, adapting LLMs to real-world business scenarios still poses challenges that warrant further investigation. This paper presents an empirical analysis aimed at bridging the gap in adapting LLMs to practical use cases. We select question answering (QA) in the insurance domain as a case study because of the reasoning it demands. Based on this task, we design a new model that relies on LLMs empowered by additional knowledge extracted from insurance policy rulebooks and DBpedia. The additional knowledge helps the LLMs understand new insurance concepts for domain adaptation. Preliminary results on two QA datasets show that knowledge enhancement significantly improves the reasoning ability of GPT-3.5 (55.80% and 57.83% in terms of accuracy). The analysis also indicates that existing public knowledge bases, e.g., DBpedia, are beneficial for knowledge enhancement. Our findings reveal that the inherent complexity of business scenarios often necessitates the incorporation of domain-specific knowledge and external resources for effective problem-solving.
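
The paper itself ships no code, but the pipeline the abstract describes can be sketched as: retrieve domain knowledge, prepend it to the prompt, and query GPT-3.5. In the sketch below, the `RULEBOOK` passages, the toy keyword retriever, and the prompt wording are illustrative assumptions rather than the authors' design; only the DBpedia SPARQL lookup and the OpenAI chat call are standard usage of those services.

```python
# Illustrative sketch of knowledge-enhanced QA prompting (not the authors' code).
from openai import OpenAI
from SPARQLWrapper import SPARQLWrapper, JSON

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-in for the insurance policy rulebooks used in the paper:
# a few hard-coded passages searched by naive keyword overlap.
RULEBOOK = [
    "A rider is an optional provision that adds benefits to a base policy.",
    "The waiting period is the time before certain coverage takes effect.",
]

def retrieve_rulebook_snippets(question: str, k: int = 2) -> list[str]:
    """Rank rulebook passages by word overlap with the question (toy retriever)."""
    q = set(question.lower().split())
    scored = sorted(RULEBOOK, key=lambda p: -len(q & set(p.lower().split())))
    return scored[:k]

def dbpedia_abstract(resource: str) -> str:
    """Fetch the English abstract of a DBpedia resource via its public SPARQL endpoint."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?abstract WHERE {{
          <http://dbpedia.org/resource/{resource}> dbo:abstract ?abstract .
          FILTER (lang(?abstract) = "en")
        }}""")
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return rows[0]["abstract"]["value"] if rows else ""

def answer(question: str, concept: str = "Insurance") -> str:
    """Prepend retrieved knowledge to the question and ask GPT-3.5."""
    knowledge = "\n".join(retrieve_rulebook_snippets(question) + [dbpedia_abstract(concept)])
    prompt = (
        "Use the insurance knowledge below to answer the question.\n\n"
        f"Knowledge:\n{knowledge}\n\nQuestion: {question}\nAnswer:"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

The retrieval step shown here is purely a placeholder; the paper's contribution lies in sourcing that knowledge from insurance rulebooks and DBpedia, not in any particular retrieval algorithm.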

Similar Work