Large Language Model As An Assignment Evaluator: Insights, Feedback, And Challenges In A 1000+ Student Course · The Large Language Model Bible Contribute to LLM-Bible

Large Language Model As An Assignment Evaluator: Insights, Feedback, And Challenges In A 1000+ Student Course

Chiang Cheng-han, Chen Wei-chih, Kuan Chun-yi, Yang Chienchou, Lee Hung-yi. Arxiv 2024

[Paper]    
GPT Model Architecture Reinforcement Learning

Using large language models (LLMs) for automatic evaluation has become an important evaluation method in NLP research. However, it is unclear whether these LLM-based evaluators can be applied in real-world classrooms to assess student assignments. This empirical report shares how we use GPT-4 as an automatic assignment evaluator in a university course with 1,028 students. Based on student responses, we find that LLM-based assignment evaluators are generally acceptable to students when students have free access to these LLM-based evaluators. However, students also noted that the LLM sometimes fails to adhere to the evaluation instructions. Additionally, we observe that students can easily manipulate the LLM-based evaluator to output specific strings, allowing them to achieve high scores without meeting the assignment rubric. Based on student feedback and our experience, we provide several recommendations for integrating LLM-based evaluators into future classrooms.

Similar Work