
On The Impacts Of Contexts On Repository-level Code Generation

Hai Nam Le, Nguyen Dung Manh, Bui Nghi D. Q. arXiv 2024

[Paper] [Code]    
Applications Has Code RAG Reinforcement Learning Tools

CodeLLMs have gained widespread adoption for code generation tasks, yet their capacity to handle repository-level code generation with complex contextual dependencies remains underexplored. Our work underscores the critical importance of leveraging repository-level contexts to generate executable and functionally correct code. We present RepoExec, a novel benchmark designed to evaluate repository-level code generation, with a focus on three key aspects: executability, functional correctness through comprehensive test case generation, and accurate utilization of cross-file contexts. Our study examines a controlled scenario where developers specify essential code dependencies (contexts), challenging models to integrate them effectively. Additionally, we introduce an instruction-tuned dataset that enhances CodeLLMs’ ability to leverage dependencies, along with a new metric, Dependency Invocation Rate (DIR), to quantify context utilization. Experimental results reveal that while pretrained LLMs demonstrate superior performance in terms of correctness, instruction-tuned models excel in context utilization and debugging capabilities. RepoExec offers a comprehensive evaluation framework for assessing code functionality and alignment with developer intent, thereby advancing the development of more reliable CodeLLMs for real-world applications. The dataset and source code are available at https://github.com/FSoft-AI4Code/RepoExec.
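The abstract describes the Dependency Invocation Rate (DIR) only at a high level. The sketch below illustrates one plausible reading of such a metric: the fraction of developer-specified dependencies that the generated solution actually invokes. The function name, the regex-based call detection, and the example dependencies are assumptions for illustration; the paper may use a different definition (e.g., AST-based call resolution).

```python
import re

def dependency_invocation_rate(generated_code: str, dependencies: list[str]) -> float:
    """Illustrative DIR: share of provided dependency names invoked in the
    generated code. Hypothetical implementation, not the paper's exact one."""
    if not dependencies:
        return 0.0
    invoked = 0
    for dep in dependencies:
        # Count a dependency as "invoked" if its name appears as a call site.
        if re.search(rf"\b{re.escape(dep)}\s*\(", generated_code):
            invoked += 1
    return invoked / len(dependencies)

# Example: the model output uses two of the three specified dependencies.
sample_output = "def solve(x):\n    y = normalize(x)\n    return score(y)\n"
print(dependency_invocation_rate(sample_output, ["normalize", "score", "load_cache"]))  # ~0.67
```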

Similar Work