Teaching Language Models To Hallucinate Less With Synthetic Tasks

Erik Jones, Hamid Palangi, Clarisse Simões, Varun Chandrasekaran, Subhabrata Mukherjee, Arindam Mitra, Ahmed Awadallah, Ece Kamar. arXiv 2023

[Paper]    
Applications, Efficiency And Optimization, Fine Tuning, Pretraining Methods, Reinforcement Learning, Training Techniques

Large language models (LLMs) frequently hallucinate on abstractive summarization tasks such as document-based question-answering, meeting summarization, and clinical report generation, even though all necessary information is included in context. However, optimizing LLMs to hallucinate less on these tasks is challenging, as hallucination is hard to efficiently evaluate at each optimization step. In this work, we show that reducing hallucination on a synthetic task can also reduce hallucination on real-world downstream tasks. Our method, SynTra, first designs a synthetic task where hallucinations are easy to elicit and measure. It next optimizes the LLM’s system message via prefix-tuning on the synthetic task, and finally transfers the system message to realistic, hard-to-optimize tasks. Across three realistic abstractive summarization tasks, SynTra reduces hallucination for two 13B-parameter LLMs using only a synthetic retrieval task for supervision. We also find that optimizing the system message rather than the model weights can be critical; fine-tuning the entire model on the synthetic task can counterintuitively increase hallucination. Overall, SynTra demonstrates that the extra flexibility of working with synthetic data can help mitigate undesired behaviors in practice.
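The optimization step the abstract describes is prefix-tuning of the system message on a synthetic task. Below is a minimal sketch of that idea, not the authors' implementation: a small HuggingFace causal LM (`gpt2`) stands in for the 13B-parameter models, and the prefix length, learning rate, `loss_on_example` helper, and toy retrieval example are all illustrative assumptions.

```python
# Sketch: prefix-tune a soft "system message" on a synthetic retrieval task,
# keeping the model weights frozen (illustrative, not the paper's code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper uses 13B-parameter chat models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.requires_grad_(False)  # freeze all model weights; only the prefix is trained

embed = model.get_input_embeddings()
prefix_len = 16  # illustrative choice
# Learnable soft prefix that plays the role of the system message.
prefix = torch.nn.Parameter(embed.weight[:prefix_len].detach().clone())
optimizer = torch.optim.Adam([prefix], lr=1e-3)

def loss_on_example(prompt: str, target: str) -> torch.Tensor:
    """Cross-entropy of the target continuation, conditioned on the soft prefix."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    target_ids = tok(target, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    # Prepend the learned prefix in embedding space.
    inputs_embeds = torch.cat([prefix.unsqueeze(0), embed(input_ids)], dim=1)
    # Only the target tokens contribute to the loss; prefix and prompt are masked.
    labels = torch.full((1, inputs_embeds.size(1)), -100)
    labels[:, -target_ids.size(1):] = target_ids
    return model(inputs_embeds=inputs_embeds, labels=labels).loss

# Synthetic retrieval example: the correct answer is fully determined by the
# context, so any other output is an easily measured hallucination.
prompt = "Names: alice, bob, carol. Retrieve the second name.\nAnswer: "
target = "bob"

for step in range(100):
    optimizer.zero_grad()
    loss = loss_on_example(prompt, target)
    loss.backward()
    optimizer.step()
```

After training, the learned prefix would be prepended (in embedding space) to prompts for the real summarization tasks, serving as the transferred system message described above.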
