Evaluation Of The Programming Skills Of Large Language Models

Luc Bryan Heitz, Joun Chamas, Christopher Scherb. arXiv 2024

[Paper]    
Applications Efficiency And Optimization GPT Model Architecture Reinforcement Learning

The advent of Large Language Models (LLMs) has revolutionized the efficiency and speed with which tasks are completed, marking a significant leap in productivity through technological innovation. As these chatbots tackle increasingly complex tasks, assessing the quality of their outputs has become paramount. This paper critically examines the output quality of two leading LLMs, OpenAI's ChatGPT and Google's Gemini AI, by comparing the programming code generated by their free versions. Through the lens of a real-world example coupled with a systematic dataset, we investigate the code quality produced by these LLMs. Given their notable proficiency in code generation, this aspect of chatbot capability presents a particularly compelling area for analysis. Furthermore, programming code often grows complex enough that verifying it becomes a formidable task, underscoring the importance of our study. This research aims to shed light on the efficacy and reliability of LLMs in generating high-quality programming code, an endeavor with significant implications for software development and beyond.
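The abstract describes comparing the quality of LLM-generated code against a systematic dataset. As a rough illustration of what such an evaluation can look like, the sketch below scores a generated function against a small test suite. All names here (`run_candidate`, the sample task) are illustrative assumptions, not the authors' actual harness or dataset.

```python
def run_candidate(source: str, func_name: str, cases) -> int:
    """Execute generated source code and count how many test cases pass."""
    namespace = {}
    try:
        exec(source, namespace)  # load the candidate code
    except Exception:
        return 0  # code that fails to load scores zero
    func = namespace.get(func_name)
    if not callable(func):
        return 0
    passed = 0
    for args, expected in cases:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass  # a runtime error on this case counts as a failure
    return passed

# Hypothetical task an LLM might be asked to solve, with its test cases.
candidate = "def add(a, b):\n    return a + b\n"
cases = [((1, 2), 3), ((-1, 1), 0), ((0, 0), 0)]
score = run_candidate(candidate, "add", cases)
print(f"{score}/{len(cases)} test cases passed")
```

A pass-rate metric like this is one common proxy for functional code quality; richer evaluations (as the paper suggests) also consider readability, maintainability, and correctness on harder, real-world tasks.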

Similar Work