Decoding News Narratives: A Critical Analysis Of Large Language Models In Framing Detection

Pastorino Valeria, Sivakumar Jasivan A., Moosavi Nafise Sadat. arXiv 2024

[Paper]

Tags: Few Shot, Fine Tuning, GPT, Model Architecture, Pretraining Methods, Reinforcement Learning, Training Techniques

Previous studies on framing have relied on manual analysis or on fine-tuning models with limited annotated datasets. However, pre-trained models, with their diverse training backgrounds, offer a promising alternative. This paper presents a comprehensive analysis of GPT-4, GPT-3.5 Turbo, and FLAN-T5 in detecting framing in news headlines. We evaluated these models in several scenarios: zero-shot, few-shot with in-domain examples, few-shot with cross-domain examples, and settings where models explain their predictions. Our results show that explainable predictions lead to more reliable outcomes. GPT-4 performed exceptionally well in few-shot settings but often misinterpreted emotional language as framing, highlighting a significant challenge. Additionally, the results suggest that consistent predictions across multiple models could help identify potential annotation inaccuracies in datasets. Finally, we propose a new small dataset for real-world evaluation on headlines from a diverse set of topics.
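The evaluation scenarios described above can be sketched as prompt-construction variants. This is a minimal, hypothetical illustration — the exact prompt wording, labels, and example headlines below are assumptions, not taken from the paper:

```python
# Hypothetical sketch of the three prompting setups: zero-shot,
# few-shot (with in-domain or cross-domain examples), and an
# "explain your prediction" variant. Wording is illustrative only.

def build_prompt(headline, examples=None, explain=False):
    """Build a framing-detection prompt for a news headline."""
    lines = ["Decide whether the following news headline uses framing."]
    if examples:
        # Few-shot: prepend labelled examples (in-domain or cross-domain).
        for text, label in examples:
            lines.append(f'Headline: "{text}"\nFraming: {label}')
    lines.append(f'Headline: "{headline}"')
    if explain:
        # Explanation setting: ask the model to justify its answer.
        lines.append("Explain your reasoning, then answer Yes or No.")
    else:
        lines.append("Answer Yes or No.")
    return "\n".join(lines)

zero_shot = build_prompt("Taxpayers foot the bill for new stadium")
few_shot_explained = build_prompt(
    "Taxpayers foot the bill for new stadium",
    examples=[("City approves stadium funding", "No")],
    explain=True,
)
```

In practice, each prompt would be sent to the model under test (e.g. GPT-4 or FLAN-T5) and the Yes/No answer parsed from the completion.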

Similar Work