Is GPT-4 A Reliable Rater? Evaluating Consistency In GPT-4 Text Ratings

Hackl Veronika, Müller Alexandra Elena, Granitzer Michael, Sailer Maximilian. arXiv 2023


This study investigates the consistency of feedback ratings generated by OpenAI’s GPT-4, a state-of-the-art artificial intelligence language model, across multiple iterations, time spans, and stylistic variations. The model rated responses to tasks within the Higher Education (HE) subject domain of macroeconomics in terms of their content and style. Statistical analysis was conducted to assess inter-rater reliability, the consistency of ratings across iterations, and the correlation between content and style ratings. The results revealed high inter-rater reliability, with ICC scores ranging between 0.94 and 0.99 for different time spans, suggesting that GPT-4 can generate consistent ratings across repetitions given a clear prompt. Style and content ratings show a high correlation of 0.87. When an inadequate style was applied, average content ratings remained constant while style ratings decreased, indicating that the large language model (LLM) effectively distinguishes between these two criteria during evaluation. The prompt used in the study is also presented and explained. Further research is necessary to assess the robustness and reliability of AI models in various use cases.
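
As a rough illustration of the reliability analysis described above, the sketch below computes intraclass correlation coefficients (ICC) over repeated ratings of the same responses. This is a minimal sketch, not the paper’s implementation: the long-format layout, the column names (`response`, `iteration`, `score`), the toy scores, and the use of the `pingouin` library are all assumptions for illustration, and the paper’s exact ICC variant may differ.

```python
# Minimal, hypothetical sketch: estimating inter-rater reliability (ICC)
# across repeated GPT-4 rating runs. Data and column names are invented.
import pandas as pd
import pingouin as pg

# Long-format data: each row is one response ("target") scored in one
# rating iteration ("rater"), e.g. three GPT-4 runs over the same answers.
ratings = pd.DataFrame({
    "response":  ["r1", "r2", "r3"] * 3,
    "iteration": ["run1"] * 3 + ["run2"] * 3 + ["run3"] * 3,
    "score":     [4, 2, 5, 4, 3, 5, 4, 2, 5],
})

# pingouin reports the standard ICC variants (ICC1, ICC2, ICC3, and their
# average-rater forms) with confidence intervals.
icc = pg.intraclass_corr(
    data=ratings, targets="response", raters="iteration", ratings="score"
)
print(icc[["Type", "ICC", "CI95%"]])
```

Treating each rating iteration as a “rater” lets standard ICC machinery quantify run-to-run consistency; ICC values above 0.9, as reported in the study, are conventionally read as excellent reliability.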
