
Rocks Coding, Not Development: A Human-Centric, Experimental Evaluation of LLM-Supported SE Tasks

Wang Wei, Ning Huilong, Zhang Gaowei, Liu Libo, Wang Yi. arXiv 2024

[Paper]    
GPT Model Architecture Reinforcement Learning

Recently, large language model (LLM) based generative AI has been gaining momentum for its impressive, high-quality performance across multiple domains, particularly since the release of ChatGPT. Many believe that these models have the potential to perform general-purpose problem-solving in software development and to replace human software developers. Nevertheless, there is a lack of serious investigation into the capability of these LLM techniques to fulfill software development tasks. In a controlled 2×2 between-subjects experiment with 109 participants, we examined whether and to what degree working with ChatGPT was helpful in a coding task and a typical software development task, and how people worked with ChatGPT. We found that while ChatGPT performed well in solving simple coding problems, its performance in supporting typical software development tasks was considerably weaker. We also observed the interactions between participants and ChatGPT and identified relations between those interactions and the task outcomes. Our study thus provides first-hand insights into using ChatGPT to fulfill software engineering tasks with real-world developers and motivates the need for novel interaction mechanisms that help developers work effectively with large language models to achieve desired outcomes.

Similar Work