Factuality Of Large Language Models In The Year 2024

Wang Yuxia, Wang Minghan, Manzoor Muhammad Arslan, Liu Fei, Georgiev Georgi, Das Rocktim Jyoti, Nakov Preslav. arXiv 2024

[Paper]
Attention Mechanism · Model Architecture · Reinforcement Learning · Survey Paper · TACL

Large language models (LLMs), especially when instruction-tuned for chat, have become part of our daily lives, freeing people from the process of searching, extracting, and integrating information from multiple sources by offering straightforward answers to a variety of questions in a single place. Unfortunately, in many cases, LLM responses are factually incorrect, which limits their applicability in real-world scenarios. As a result, evaluating and improving the factuality of LLMs has recently attracted substantial research attention. In this survey, we critically analyze existing work with the aim of identifying the major challenges and their associated causes, pointing out potential solutions for improving the factuality of LLMs, and analyzing the obstacles to automated factuality evaluation for open-ended text generation. We further offer an outlook on where future research should go.

Similar Work