Generative Large Language Models In Automated Fact-checking: A Survey

Ivan Vykopal, Matúš Pikuliak, Simon Ostermann, Marián Šimko. arXiv 2024

[Paper]    
Fine-Tuning · Pretraining Methods · Prompting · RAG · Reinforcement Learning · Survey Paper · Tools · Training Techniques

The dissemination of false information across online platforms poses a serious societal challenge and necessitates robust measures for information verification. While manual fact-checking remains instrumental, the growing volume of false information requires automated methods. Large language models (LLMs) offer promising opportunities to assist fact-checkers, thanks to their extensive knowledge and strong reasoning capabilities. In this survey, we investigate the use of generative LLMs for fact-checking, illustrating the approaches that have been employed and the techniques for prompting or fine-tuning LLMs. By providing an overview of existing approaches, this survey aims to improve the understanding of how LLMs can be utilized in fact-checking and to facilitate further progress in their involvement in this process.
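As a concrete illustration of the kind of prompting the surveyed approaches employ, the sketch below composes a zero-shot claim-verification prompt and maps a model's free-text answer onto a canonical verdict label. This is a minimal, hypothetical example, not a method from the survey; the label set and prompt wording are assumptions, and the actual model call is left out so any chat-completion API can be plugged in.

```python
# Illustrative sketch (not from the survey): zero-shot claim verification.
# The label set and prompt template are assumptions for demonstration.

VERDICTS = ("SUPPORTED", "REFUTED", "NOT ENOUGH INFO")

def build_prompt(claim: str, evidence: str) -> str:
    """Compose a zero-shot claim-verification prompt for an LLM."""
    return (
        "You are a fact-checking assistant.\n"
        f"Claim: {claim}\n"
        f"Evidence: {evidence}\n"
        "Answer with exactly one label: "
        "SUPPORTED, REFUTED, or NOT ENOUGH INFO."
    )

def parse_verdict(model_output: str) -> str:
    """Map the model's free-text answer to a canonical verdict label."""
    text = model_output.upper()
    for label in VERDICTS:
        if label in text:
            return label
    return "NOT ENOUGH INFO"  # conservative fallback for unparseable output

# Example with a hypothetical model response (no API call made here):
print(parse_verdict("The evidence clearly shows the claim is refuted."))
# → REFUTED
```

In practice, the prompt string would be sent to a generative LLM and `parse_verdict` applied to its reply; the surveyed works vary this basic recipe with few-shot examples, retrieved evidence, and fine-tuning.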

Similar Work