On Learning To Summarize With Large Language Models As References

Yixin Liu, Kejian Shi, Katherine S. He, Longtian Ye, Alexander R. Fabbri, Pengfei Liu, Dragomir Radev, Arman Cohan. arXiv 2023

[Paper]
Tags: Applications, Fine Tuning, Pretraining Methods, RAG, Reinforcement Learning, Survey Paper, Training Techniques

Recent studies have found that summaries generated by large language models (LLMs) are favored by human annotators over the original reference summaries in commonly used summarization datasets. Therefore, we study an LLM-as-reference learning setting for smaller text summarization models to investigate whether their performance can be substantially improved. To this end, we use LLMs as both oracle summary generators for standard supervised fine-tuning and as oracle summary evaluators for efficient contrastive learning that leverages the LLMs' supervision signals. We conduct comprehensive experiments with source news articles and find that (1) summarization models trained under the LLM-as-reference setting achieve significant performance improvements in both LLM and human evaluations; (2) contrastive learning outperforms standard supervised fine-tuning in both low- and high-resource settings. Our experimental results also enable a meta-analysis of LLMs' summary evaluation capabilities under a challenging setting, showing that LLMs are not well aligned with human evaluators. In particular, our expert human evaluation reveals remaining nuanced performance gaps between LLMs and our fine-tuned models, which the LLMs fail to capture. We therefore call for further study of both the potential and the challenges of using LLMs in summarization model development.
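The contrastive learning setup the abstract describes, in which an LLM evaluator ranks candidate summaries and the smaller model is trained to agree with that ranking, can be sketched roughly as follows. This is an illustrative margin-ranking objective under our own assumptions, not the paper's exact loss; the function and variable names are hypothetical.

```python
def pairwise_margin_loss(model_scores, llm_ranks, margin=0.01):
    """Illustrative contrastive objective: for every pair of candidate
    summaries, penalize the student model when its own score ordering
    disagrees with the LLM evaluator's ranking (rank 0 = best).
    The margin grows with the rank gap, as in margin-ranking losses."""
    loss = 0.0
    n = len(model_scores)
    for i in range(n):
        for j in range(n):
            if llm_ranks[i] < llm_ranks[j]:  # candidate i ranked better than j
                rank_gap = llm_ranks[j] - llm_ranks[i]
                # hinge penalty if the model fails to score i above j by the margin
                loss += max(0.0, margin * rank_gap
                            - (model_scores[i] - model_scores[j]))
    return loss

# A candidate list the model already orders correctly incurs zero loss:
print(pairwise_margin_loss([0.9, 0.5, 0.1], [0, 1, 2]))  # → 0.0
```

In practice the model scores would be length-normalized sequence log-probabilities and the LLM ranks would come from prompting the evaluator model; here both are plain numbers for clarity.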

Similar Work