Low-cost Language Models: Survey And Performance Evaluation On Python Code Generation

Espejel Jessica López, Alassan Mahaman Sanoussi Yahaya, Bouhandi Merieme, Dahhane Walid, Ettifouri El Hassane. arXiv 2024

[Paper]    
Tags: Applications · GPT · Model Architecture · Prompting · Reinforcement Learning · Survey Paper

Large Language Models (LLMs) have become a popular choice for many Natural Language Processing (NLP) tasks due to their versatility and ability to produce high-quality results. In particular, they are increasingly used for automatic code generation to help developers tackle repetitive coding tasks. However, the substantial computational and memory requirements of LLMs often make them inaccessible to users with limited resources. This paper focuses on very low-cost models, which offer a more accessible alternative to resource-intensive LLMs. We notably: (1) propose a thorough semi-manual evaluation of their performance in generating Python code, (2) introduce a Chain-of-Thought (CoT) prompting strategy to improve model reasoning and code quality, and (3) propose a new dataset of 60 programming problems of varied difficulty, designed to extend existing benchmarks like HumanEval and EvalPlus. Our findings show that some low-cost models achieve competitive results compared to larger models like ChatGPT despite using significantly fewer resources. We will make our dataset and prompts publicly available to support further research.
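The paper's exact CoT prompt is not reproduced on this page, but the idea is to have the model reason through the problem before emitting code. Below is a minimal illustrative sketch of such a prompt template; the wording and the `build_cot_prompt` helper are assumptions for illustration, not the authors' prompt.

```python
# Hypothetical sketch of a Chain-of-Thought (CoT) prompt for Python code
# generation. The template wording is an assumption; the paper's actual
# prompt may differ.

COT_TEMPLATE = """You are a Python programmer. Solve the problem step by step.

Problem:
{problem}

Before writing code:
1. Restate the inputs and the expected output.
2. Outline the algorithm in plain language.
Then write the final Python function in a single code block.
"""

def build_cot_prompt(problem: str) -> str:
    """Fill the CoT template with a programming problem statement."""
    return COT_TEMPLATE.format(problem=problem)

if __name__ == "__main__":
    # The prompt string can be fed to any low-cost local model runtime.
    print(build_cot_prompt("Return the sum of the even numbers in a list."))
```

Likewise, HumanEval/EvalPlus-style evaluation boils down to executing a generated solution against assert-based tests. The sketch below shows that core check under simplifying assumptions (no sandboxing or timeouts, which real harnesses such as EvalPlus add); `sum_even` and its tests are hypothetical examples.

```python
# Hedged sketch of a HumanEval-style functional check: run a model's
# generated solution, then run assert-based tests against it.
# WARNING: real harnesses sandbox execution and enforce timeouts;
# this minimal version omits both.

def passes_tests(generated_code: str, test_code: str) -> bool:
    """Return True if the generated code passes every assert in test_code."""
    namespace: dict = {}
    try:
        exec(generated_code, namespace)  # define the candidate function
        exec(test_code, namespace)       # run the assert-based tests
        return True
    except Exception:
        return False

generated_code = """
def sum_even(nums):
    return sum(n for n in nums if n % 2 == 0)
"""
test_code = """
assert sum_even([1, 2, 3, 4]) == 6
assert sum_even([]) == 0
"""

print(passes_tests(generated_code, test_code))  # True
```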
