
Large Language Models As Evaluators For Scientific Synthesis

Evans Julia, D'souza Jennifer, Auer Sören. arXiv 2024

[Paper]    
Tags: GPT, Interpretability And Explainability, Model Architecture

Our study explores how well state-of-the-art Large Language Models (LLMs), such as GPT-4 and Mistral, can assess the quality of scientific summaries or, more fittingly, scientific syntheses, comparing their evaluations to those of human annotators. We used a dataset of 100 research questions and their syntheses, generated by GPT-4 from the abstracts of five related papers, checked against human quality ratings. The study evaluates the ability of both the closed-source GPT-4 and the open-source Mistral models to rate these summaries and provide reasons for their judgments. Preliminary results show that LLMs can offer logical explanations that somewhat match the quality ratings, yet a deeper statistical analysis reveals a weak correlation between LLM and human ratings, suggesting both the potential and the current limitations of LLMs in scientific synthesis evaluation.
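The weak agreement reported in the abstract comes down to a rank-correlation check between the two sets of ratings. The sketch below is not from the paper; the rating values and variable names are illustrative placeholders. It shows how such a comparison is typically computed with Spearman's rho using SciPy:

```python
# Minimal sketch (not the authors' code) of comparing LLM-assigned quality
# ratings with human ratings via rank correlation. All numbers are illustrative.
from scipy.stats import spearmanr

# Hypothetical quality ratings for a handful of syntheses (e.g., a 1-5 scale).
human_ratings = [4, 5, 3, 2, 4, 5, 3, 1]
llm_ratings   = [3, 4, 4, 3, 3, 5, 2, 2]

# Spearman's rho measures how well the two rankings agree;
# a value near 0 would correspond to the "weak correlation" the study reports.
rho, p_value = spearmanr(human_ratings, llm_ratings)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```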

Similar Work