
Learning Fine-grained Grounded Citations For Attributed Large Language Models

Huang Lei, Feng Xiaocheng, Ma Weitao, Gu Yuxuan, Zhong Weihong, Feng Xiachong, Yu Weijiang, Peng Weihua, Tang Duyu, Tu Dandan, Qin Bing. arXiv 2024

[Paper]    
GPT In Context Learning Model Architecture Prompting RAG Reinforcement Learning Tools Training Techniques

Despite their impressive performance on information-seeking tasks, large language models (LLMs) still struggle with hallucinations. Attributed LLMs, which augment generated text with in-line citations, have shown potential in mitigating hallucinations and improving verifiability. However, current approaches suffer from suboptimal citation quality due to their reliance on in-context learning. Furthermore, the practice of citing only coarse document identifiers makes it challenging for users to perform fine-grained verification. In this work, we introduce FRONT, a training framework designed to teach LLMs to generate Fine-Grained Grounded Citations. FRONT first grounds model outputs in fine-grained supporting quotes; these quotes then guide the generation of grounded and consistent responses, not only improving citation quality but also facilitating fine-grained verification. Experiments on the ALCE benchmark demonstrate the efficacy of FRONT in generating superior grounded responses and highly supportive citations. With LLaMA-2-7B, the framework significantly outperforms all baselines, achieving an average improvement of 14.21% in citation quality across all datasets, even surpassing ChatGPT.
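
To make the "ground-then-generate" idea concrete, the sketch below illustrates a two-stage pipeline: select fine-grained supporting quotes from retrieved documents, then build a generation prompt in which each claim can carry an in-line citation such as [1]. This is a minimal, hypothetical illustration; the helper names (`select_quotes`, `build_grounded_prompt`) and the lexical-overlap quote selector are assumptions for demonstration, not the FRONT training framework or its released code, which learns the grounding behavior through fine-tuning.

```python
# Minimal sketch of quote-grounded, citation-bearing generation.
# All names and the overlap heuristic are illustrative assumptions,
# not taken from the FRONT paper.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: int
    text: str


def select_quotes(question: str, docs: list[Document],
                  max_quotes: int = 3) -> list[tuple[int, str]]:
    """Pick sentences with the highest word overlap with the question.

    Stands in for the learned grounding step; a trained model would
    select supporting quotes far more reliably than lexical overlap.
    """
    q_words = set(question.lower().split())
    candidates = []
    for doc in docs:
        for sentence in doc.text.split(". "):
            overlap = len(q_words & set(sentence.lower().split()))
            if overlap:
                candidates.append((overlap, doc.doc_id, sentence.strip(". ")))
    candidates.sort(reverse=True)
    return [(doc_id, sent) for _, doc_id, sent in candidates[:max_quotes]]


def build_grounded_prompt(question: str, quotes: list[tuple[int, str]]) -> str:
    """Assemble a prompt that asks the model to answer using only the
    selected quotes, citing each quote's source document in-line."""
    lines = [
        "Answer the question using only the quotes below.",
        "Cite the supporting document after each claim, e.g. [1].",
        "",
    ]
    for doc_id, quote in quotes:
        lines.append(f'[{doc_id}] "{quote}"')
    lines += ["", f"Question: {question}", "Answer:"]
    return "\n".join(lines)


if __name__ == "__main__":
    docs = [
        Document(1, "The Amazon is the largest rainforest on Earth. "
                    "It spans nine countries."),
        Document(2, "Rainforests store large amounts of carbon. "
                    "Deforestation releases it."),
    ]
    question = "Which is the largest rainforest on Earth?"
    prompt = build_grounded_prompt(question, select_quotes(question, docs))
    # The prompt would then be passed to an attributed LLM (e.g. a
    # fine-tuned LLaMA-2-7B) to produce a grounded, cited answer.
    print(prompt)
```

The sketch only shows the inference-time shape of the approach; the paper's contribution is the training framework that teaches the model to perform the grounding and citation steps itself.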

Similar Work