Self-checker: Plug-and-play Modules For Fact-checking With Large Language Models

Miaoran Li, Baolin Peng, Michel Galley, Jianfeng Gao, Zhu Zhang. arXiv 2023

[Paper]    
Fine Tuning GPT In Context Learning Model Architecture Pretraining Methods Prompting Tools Training Techniques

Fact-checking is an essential task in NLP, commonly used to validate the factual accuracy of claims. Prior work has mainly focused on fine-tuning pre-trained language models on specific datasets, which can be computationally intensive and time-consuming. With the rapid development of large language models (LLMs) such as ChatGPT and GPT-3, researchers are now exploring their in-context learning capabilities for a wide range of tasks. In this paper, we assess the capacity of LLMs for fact-checking by introducing Self-Checker, a framework comprising a set of plug-and-play modules that perform fact-checking purely by prompting LLMs in an almost zero-shot setting. This framework provides a fast and efficient way to construct fact-checking systems in low-resource environments. Empirical results demonstrate the potential of Self-Checker in utilizing LLMs for fact-checking. However, there is still significant room for improvement compared to SOTA fine-tuned models, which suggests that LLM adoption could be a promising approach for future fact-checking research.
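The plug-and-play prompting approach described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the module names, prompt wordings, and the stubbed `llm` function are all assumptions for demonstration, and a real system would call an LLM API and retrieve evidence from a document store.

```python
# Hypothetical sketch of Self-Checker-style plug-and-play fact-checking
# modules, composed purely via prompting. The LLM call is stubbed with
# canned responses so the example runs offline.

def llm(prompt: str) -> str:
    """Stand-in for a real LLM API call (e.g. ChatGPT); returns canned text."""
    if "Extract the claims" in prompt:
        return "The Eiffel Tower is in Paris."
    if "Does the evidence support" in prompt:
        return "SUPPORTED"
    return ""

def claim_processor(text: str) -> list[str]:
    """Module 1: prompt the LLM to split input text into atomic claims."""
    out = llm(f"Extract the claims in the following text, one per line:\n{text}")
    return [line.strip() for line in out.splitlines() if line.strip()]

def verdict_predictor(claim: str, evidence: str) -> str:
    """Module 2: prompt the LLM to judge a claim against given evidence."""
    return llm(
        "Does the evidence support the claim? Answer SUPPORTED, REFUTED, "
        f"or NOT ENOUGH INFO.\nClaim: {claim}\nEvidence: {evidence}"
    )

def fact_check(text: str, evidence: str) -> dict[str, str]:
    """Compose the modules into a zero-shot fact-checking pipeline."""
    return {c: verdict_predictor(c, evidence) for c in claim_processor(text)}

result = fact_check(
    "The Eiffel Tower is in Paris.",
    "The Eiffel Tower is a landmark in Paris, France.",
)
print(result)  # {'The Eiffel Tower is in Paris.': 'SUPPORTED'}
```

Because each module is just a prompt template around the same LLM call, modules can be swapped or added (e.g. an evidence-retrieval step) without any fine-tuning, which is what makes the approach attractive in low-resource settings.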

Similar Work