Trapping LLM Hallucinations Using Tagged Context Prompts

Feldman Philip, Foulds James R., Pan Shimei. arXiv 2023

[Paper]    
Agentic, GPT, Model Architecture, Prompting, Reinforcement Learning, Tools, Uncategorized

Recent advances in large language models (LLMs), such as ChatGPT, have led to highly sophisticated conversational agents. However, these models suffer from “hallucinations,” where the model generates false or fabricated information. Addressing this challenge is crucial, particularly as AI-driven platforms are adopted across various sectors. In this paper, we propose a novel method to recognize and flag instances when LLMs perform outside their domain knowledge, ensuring users receive accurate information. We find that the use of context combined with embedded tags can successfully combat hallucinations within generative language models. To do this, we baseline hallucination frequency in no-context prompt-response pairs, using generated URLs as easily tested indicators of fabricated data. We observe a significant reduction in overall hallucination when context is supplied along with question prompts for the tested generative engines. Lastly, we evaluate how placing tags within contexts affects model responses and find that tagged contexts eliminate hallucinations in responses with 98.88% effectiveness.
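As a rough illustration of the tagged-context idea, the sketch below builds a prompt in which every context sentence carries a unique tag the model is asked to cite, then checks a response for those tags and for generated URLs (the paper's no-context indicator of fabrication). The tag format, prompt wording, and helper names are assumptions for illustration, not the paper's exact implementation.

```python
import re


def build_tagged_prompt(question: str, context_sentences: list[str]) -> tuple[str, set[str]]:
    """Append a unique tag to each context sentence and instruct the model
    to cite the tags it relies on (tag format is an assumption)."""
    tags, tagged = set(), []
    for i, sentence in enumerate(context_sentences):
        tag = f"source-{1000 + i}"
        tags.add(tag)
        tagged.append(f"{sentence} ({tag})")
    prompt = (
        "Answer the question using only the context below, citing the "
        "(source-NNNN) tag of every sentence you use. If the context does "
        "not contain the answer, say that you do not know.\n\n"
        f"Context: {' '.join(tagged)}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return prompt, tags


def flag_possible_hallucination(response: str, expected_tags: set[str]) -> bool:
    """Flag a response that cites none of the known tags, i.e. one that may
    have been generated from outside the supplied context."""
    cited = set(re.findall(r"source-\d{4}", response))
    return not (cited & expected_tags)


def extract_urls(response: str) -> list[str]:
    """Pull URLs out of a response; in a no-context baseline these serve as
    easily tested indicators of fabricated data (a link that resolves to
    nothing is suspect)."""
    return re.findall(r"https?://\S+", response)
```

In use, a caller would send `prompt` to whichever generative engine is under test and treat any response flagged by `flag_possible_hallucination` as potentially outside the supplied context, warning the user rather than presenting it as fact.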

Similar Work