Improving Factuality In Large Language Models Via Decoding-time Hallucinatory And Truthful Comparators

Yang Dingkang, Xiao Dongling, Wei Jinjie, Li Mingcheng, Chen Zhaoyu, Li Ke, Zhang Lihua. arXiv 2024

[Paper]
Fine Tuning, Pretraining Methods, Tools, Training Techniques

Despite their remarkable capabilities, Large Language Models (LLMs) are prone to generating responses that contradict verifiable facts, i.e., unfaithful hallucinated content. Existing efforts generally focus on optimizing model parameters or editing semantic representations, which compromises the internal factual knowledge of target LLMs. In addition, hallucinations typically exhibit multifaceted patterns in downstream tasks, limiting the model's holistic performance across tasks. In this paper, we propose a Comparator-driven Decoding-Time (CDT) framework to alleviate response hallucination. First, we construct hallucinatory and truthful comparators with multi-task fine-tuning samples. We then present an instruction prototype-guided mixture-of-experts strategy to enhance the ability of the corresponding comparators to capture different hallucination or truthfulness patterns across distinct task instructions. CDT constrains next-token predictions to factuality-robust distributions by contrasting the logit differences between the target LLMs and these comparators. Systematic experiments on multiple downstream tasks show that our framework significantly improves model performance and response factuality.
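
The abstract describes contrasting the target model's logits with those of a truthful and a hallucinatory comparator at each decoding step. Below is a minimal sketch of that idea in PyTorch; the function names, the scaling weight `alpha`, and the toy `fake_step` model stand-ins are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def cdt_next_token_logits(target_logits, truthful_logits, hallucinatory_logits, alpha=1.0):
    """Adjust the target model's next-token logits with a comparator contrast.

    All inputs are [vocab_size] tensors for the current decoding step. The
    contrast term rewards tokens favored by the truthful comparator and
    penalizes tokens favored by the hallucinatory comparator; `alpha` is a
    hypothetical scaling weight, not taken from the paper.
    """
    contrast = truthful_logits - hallucinatory_logits
    return target_logits + alpha * contrast


def greedy_decode(step_fn, max_new_tokens=32, eos_id=2):
    """Greedy decoding loop driven by a user-supplied step function.

    `step_fn` receives the token ids generated so far and returns the
    (target, truthful, hallucinatory) next-token logits for that prefix.
    """
    generated = []
    for _ in range(max_new_tokens):
        tgt, tru, hal = step_fn(generated)
        logits = cdt_next_token_logits(tgt, tru, hal, alpha=1.0)
        next_id = int(torch.argmax(F.log_softmax(logits, dim=-1)))
        generated.append(next_id)
        if next_id == eos_id:
            break
    return generated


# Toy usage: random logits stand in for the three models over a small vocabulary.
if __name__ == "__main__":
    vocab = 10

    def fake_step(prefix):
        torch.manual_seed(len(prefix))
        return torch.randn(vocab), torch.randn(vocab), torch.randn(vocab)

    print(greedy_decode(fake_step, max_new_tokens=5))
```

In a real setting, `step_fn` would run the target LLM and the two fine-tuned comparators on the same prefix; the contrast shifts probability mass toward tokens the truthful comparator prefers over the hallucinatory one, without touching the target model's parameters.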

Similar Work