
Evaluating Text Summaries Generated by Large Language Models Using OpenAI's GPT

Hassan Shakil, Atqiya Munawara Mahi, Phuoc Nguyen, Zeydy Ortiz, Mamoun T. Mardini. arXiv 2024

[Paper]    
BERT GPT Model Architecture Pretraining Methods Reinforcement Learning Transformer

This research examines the effectiveness of OpenAI's GPT models as independent evaluators of text summaries generated by six transformer-based models from Hugging Face: DistilBART, BERT, ProphetNet, T5, BART, and PEGASUS. We evaluated these summaries on essential properties of a high-quality summary - conciseness, relevance, coherence, and readability - using traditional metrics such as ROUGE and Latent Semantic Analysis (LSA). Uniquely, we also employed GPT not as a summarizer but as an evaluator, allowing it to assess summary quality independently, without predefined metrics. Our analysis revealed significant correlations between GPT's evaluations and the traditional metrics, particularly for relevance and coherence. The results demonstrate GPT's potential as a robust tool for evaluating text summaries, offering insights that complement established metrics and providing a basis for comparative analysis of transformer-based models in natural language processing tasks.
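To make one of the traditional metrics concrete, the following is a minimal sketch of ROUGE-1 (unigram overlap between a reference and a candidate summary), the simplest member of the ROUGE family mentioned above. This is an illustrative pure-Python implementation, not the paper's actual evaluation code, which would typically use an established ROUGE library with stemming and tokenization options.

```python
from collections import Counter

def rouge_1(reference: str, candidate: str) -> dict:
    """Compute ROUGE-1 precision, recall, and F1 via clipped unigram overlap."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Overlap = sum of per-token minimum counts (each match clipped to the
    # number of times the token appears in the reference)
    overlap = sum((ref_counts & cand_counts).values())
    precision = overlap / max(sum(cand_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: 5 of 6 unigrams overlap ("the" x2, "cat", "on", "mat")
scores = rouge_1("the cat sat on the mat", "the cat lay on the mat")
```

GPT-as-evaluator, by contrast, is prompted to score the same properties (conciseness, relevance, coherence, readability) directly from the text, without reference to such lexical-overlap counts.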

Similar Work