
AutoCoder: Enhancing Code Large Language Model With AIEV-Instruct

Lei Bin, Li Yuchen, Chen Qiuwu. arXiv 2024

[Paper] [Code]    
Agentic, GPT, Has Code, Model Architecture, Reinforcement Learning, Training Techniques

We introduce AutoCoder, the first Large Language Model to surpass GPT-4 Turbo (April 2024) and GPT-4o in pass@1 on the HumanEval benchmark (90.9% vs. 90.2%). In addition, AutoCoder offers a more versatile code interpreter than GPT-4 Turbo and GPT-4o: its code interpreter can install external packages rather than being limited to built-in ones. AutoCoder's training data is a multi-turn dialogue dataset created by a system that combines agent interaction with external code execution verification, a method we term AIEV-Instruct (Instruction Tuning with Agent-Interaction and Execution-Verified). Compared to previous large-scale code dataset generation methods, AIEV-Instruct reduces dependence on proprietary large models and provides an execution-validated code dataset. The code and a demo video are available at https://github.com/bin123apple/AutoCoder.
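To make the execution-verification idea concrete, here is a minimal, hypothetical sketch of such a loop: two agent callables (a `questioner` that writes unit tests and a `programmer` that writes code) exchange turns, each candidate is run in a subprocess, and only dialogues whose final code passes its tests are kept. The function names, the two-agent split, and the retry limit are illustrative assumptions for this sketch, not the authors' actual pipeline.

```python
import os
import subprocess
import sys
import tempfile

def run_with_tests(code: str, tests: str, timeout: int = 30):
    """Run candidate code plus its unit tests in a fresh subprocess.

    Returns (passed, stderr) so a failure message can be fed back
    into the next dialogue turn.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + tests)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.returncode == 0, proc.stderr
    except subprocess.TimeoutExpired:
        return False, "execution timed out"
    finally:
        os.remove(path)

def execution_verified_dialogue(questioner, programmer, task, max_turns=4):
    """Collect one multi-turn sample ending in execution-verified code.

    `questioner` and `programmer` are stand-ins for the two agents
    (e.g. wrappers around LLM calls); each maps a prompt to a string.
    Returns the dialogue if the final code passes its tests, else None.
    """
    tests = questioner(f"Write unit tests (plain asserts) for:\n{task}")
    dialogue = [{"role": "user", "content": task}]
    prompt = task
    for _ in range(max_turns):
        code = programmer(prompt)  # agent proposes a solution
        dialogue.append({"role": "assistant", "content": code})
        passed, err = run_with_tests(code, tests)
        if passed:
            return dialogue  # keep only execution-verified samples
        # Feed the execution error back as the next user turn.
        prompt = f"{task}\nThe previous code failed with:\n{err}\nPlease fix it."
        dialogue.append({"role": "user", "content": prompt})
    return None  # discard samples that never pass
```

The property this sketch mirrors is that correctness is established by actually executing the code rather than by a proprietary model's judgment, which is what allows the surviving dialogues to serve as verified training data.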

Similar Work