ISQA: Informative Factuality Feedback For Scientific Summarization

Li Zekai, Qin Yanxia, Liu Qian, Kan Min-yen. arXiv 2024

[Paper] [Code]    
Agentic · Applications · Has Code · Reinforcement Learning

We propose Iterative Factuality Refining on Informative Scientific Question-Answering (ISQA) feedback (code available at https://github.com/lizekai-richard/isqa), a method inspired by human learning theories that employs model-generated feedback consisting of both positive and negative information. By iteratively refining summaries, it probes the underlying rationale of statements to improve the factuality of scientific summarization. ISQA does this in a fine-grained manner: a summarization agent is asked to reinforce validated statements via positive feedback and to fix incorrect ones via negative feedback. Our findings demonstrate that the ISQA feedback mechanism significantly improves the factuality of various open-source LLMs on the summarization task, as evaluated across multiple scientific datasets.
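The abstract describes an iterative loop: a QA agent probes the current summary against the source document, statements it can validate come back as positive feedback, and unsupported or contradicted ones come back as negative feedback for the summarizer to fix. The following is a minimal Python sketch of that control flow under stated assumptions; the names (`isqa_refine`, `summarizer`, `qa_agent`, the `keep`/`fix` keywords, and the feedback dictionary) are illustrative, not the paper's actual interface.

```python
# Hypothetical sketch of an ISQA-style refinement loop. Both agents are
# passed in as plain callables so the control flow stays visible; in the
# paper's setting they would be LLM-backed.

def isqa_refine(document, summarizer, qa_agent, max_rounds=3):
    """Iteratively refine a summary using QA-based factuality feedback."""
    summary = summarizer(document)
    for _ in range(max_rounds):
        # The QA agent probes statements in the summary against the source:
        # supported answers become positive feedback, unsupported ones negative.
        feedback = qa_agent(document, summary)
        if not feedback["negative"]:
            break  # every probed statement was validated; stop early
        # Reinforce validated statements and repair the flagged ones.
        summary = summarizer(
            document,
            keep=feedback["positive"],
            fix=feedback["negative"],
        )
    return summary
```

Keeping the positive feedback explicit, rather than only flagging errors, is what lets the summarizer preserve already-validated statements across rounds instead of regenerating them from scratch.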

Similar Work