Unmasking The Giant: A Comprehensive Evaluation Of Chatgpt's Proficiency In Coding Algorithms And Data Structures

Arefin Sayed Erfan, Heya Tasnia Ashrafi, Al-Qudah Hasan, Ineza Ynes, Serwadda Abdul. Proceedings of the 2023

[Paper]    
GPT Model Architecture Reinforcement Learning Tools

The transformative influence of Large Language Models (LLMs) is profoundly reshaping the Artificial Intelligence (AI) technology domain. Notably, ChatGPT distinguishes itself within these models, demonstrating remarkable performance in multi-turn conversations and exhibiting code proficiency across an array of languages. In this paper, we carry out a comprehensive evaluation of ChatGPT’s coding capabilities based on what is to date the largest catalog of coding challenges. Our focus is on the Python programming language and problems centered on data structures and algorithms, two topics at the very foundations of Computer Science. We evaluate ChatGPT for its ability to generate correct solutions to the problems fed to it, its code quality, and the nature of run-time errors thrown by its code. Where ChatGPT code executes successfully but fails to solve the problem at hand, we look into patterns in the test cases passed in order to gain some insight into how wrong ChatGPT’s code is in these situations. To infer whether ChatGPT might have directly memorized some of the data that was used to train it, we methodically design an experiment to investigate this phenomenon. Making comparisons with human performance whenever feasible, we investigate all the above questions in the context of both of its underlying learning models (GPT-3.5 and GPT-4), across a vast array of sub-topics within the main topics, and on problems having varying degrees of difficulty.
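The abstract describes classifying model-generated solutions as correct, partially correct (only some test cases pass), or failing with run-time errors. The paper does not include code here; the sketch below is only an illustration of how such a test-case harness might look, with the function names (`evaluate`, `generated_two_sum`) and the outcome labels being assumptions rather than the authors' actual evaluation pipeline.

```python
# Illustrative sketch (not from the paper): run a model-generated solution
# against test cases and bucket the outcome into categories similar to those
# discussed in the abstract: correct, wrong answer, or run-time error.
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple


@dataclass
class Outcome:
    passed: int
    failed: int
    errors: List[str]

    @property
    def verdict(self) -> str:
        if self.errors:
            return "runtime error"
        if self.failed == 0:
            return "correct"
        total = self.passed + self.failed
        return f"wrong answer ({self.passed} of {total} cases passed)"


def evaluate(solution: Callable[..., Any],
             cases: List[Tuple[tuple, Any]]) -> Outcome:
    """Run `solution` on (args, expected) pairs and tally the results."""
    passed, failed, errors = 0, 0, []
    for args, expected in cases:
        try:
            result = solution(*args)
        except Exception as exc:  # record the error type, since error kinds are analysed
            errors.append(type(exc).__name__)
            continue
        if result == expected:
            passed += 1
        else:
            failed += 1
    return Outcome(passed, failed, errors)


# Hypothetical model-generated solution to a classic two-sum problem.
def generated_two_sum(nums, target):
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []


if __name__ == "__main__":
    cases = [(([2, 7, 11, 15], 9), [0, 1]), (([3, 3], 6), [0, 1])]
    print(evaluate(generated_two_sum, cases).verdict)  # -> correct
```

In such a setup, counting how many cases pass gives the "how wrong is the code" signal mentioned above, while the recorded exception types support an analysis of run-time error categories.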

Similar Work