Is GPT-4 Alone Sufficient For Automated Essay Scoring?: A Comparative Judgment Approach Based On Rater Cognition

Kim Seungju, Jo Meounggun. arXiv 2024

[Paper]    
Few Shot, Fine Tuning, GPT, Model Architecture, Pretraining Methods, Prompting, Reinforcement Learning, Training Techniques

Large Language Models (LLMs) have shown promise in Automated Essay Scoring (AES), but their zero-shot and few-shot performance often falls short of state-of-the-art models and human raters. Fine-tuning LLMs for each specific task, however, is impractical given the variety of essay prompts and rubrics used in real-world educational contexts. This study proposes a novel approach that combines LLMs with Comparative Judgment (CJ) for AES, using zero-shot prompting to choose the better of two essays. We demonstrate that the CJ method surpasses traditional rubric-based scoring when LLMs are used for essay scoring.
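The abstract does not include code, but the pairwise setup it describes is straightforward to sketch. The snippet below is a minimal, hypothetical illustration only: the `judge_pair` and `cj_scores` names, the `"gpt-4"` model string, the prompt wording, and the win-count ranking (a simplification of a full CJ fit such as Bradley-Terry) are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of zero-shot Comparative Judgment (CJ) essay scoring.
# Assumptions (not from the paper): the OpenAI Python client (openai>=1.0),
# the "gpt-4" model name, and the prompt wording are illustrative choices.

import itertools
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_pair(essay_a: str, essay_b: str, model: str = "gpt-4") -> str:
    """Ask the model to pick the better essay; returns 'A' or 'B'."""
    prompt = (
        "You are an experienced essay rater. Compare the two essays below "
        "and answer with a single letter, A or B, for the better essay.\n\n"
        f"Essay A:\n{essay_a}\n\nEssay B:\n{essay_b}\n\nBetter essay:"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep judgments as deterministic as the API allows
    )
    answer = resp.choices[0].message.content.strip().upper()
    return "A" if answer.startswith("A") else "B"

def cj_scores(essays: list[str]) -> list[int]:
    """Compare all pairs and rank essays by pairwise win count."""
    wins = [0] * len(essays)
    for i, j in itertools.combinations(range(len(essays)), 2):
        winner = judge_pair(essays[i], essays[j])
        wins[i if winner == "A" else j] += 1
    return wins
```

In practice, CJ systems typically convert the pairwise outcomes into latent scores with a Bradley-Terry-style model rather than raw win counts, and they sample a subset of pairs rather than all O(n²) comparisons; the sketch above omits both refinements for brevity.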
