
Exploring How Multiple Levels of GPT-Generated Programming Hints Support or Disappoint Novices

Ruiwei Xiao, Xinying Hou, John Stamper. arXiv 2024

[Paper]    
Tags: GPT, Model Architecture, Reinforcement Learning

Recent studies have integrated large language models (LLMs) into diverse educational contexts, including providing adaptive programming hints, a type of feedback that focuses on helping students move forward during problem-solving. However, most existing LLM-based hint systems are limited to a single hint type. To investigate whether and how different levels of hints can support students' problem-solving and learning, we conducted a think-aloud study with 12 novices using the LLM Hint Factory, a system providing four levels of hints, from general natural language guidance to concrete code assistance, varying in format and granularity. We discovered that high-level natural language hints alone can be unhelpful or even misleading, especially when addressing next-step or syntax-related help requests. Adding lower-level hints, such as code examples with in-line comments, can better support students. The findings open up future work on customizing help responses along content, format, and granularity dimensions to accurately identify and meet students' learning needs.

Similar Work