Grounding Data Science Code Generation with Input-Output Specifications

Wen Yeming, Yin Pengcheng, Shi Kensen, Michalewski Henryk, Chaudhuri Swarat, Polozov Alex. arXiv 2024

[Paper]    
Applications, Fine Tuning, Pretraining Methods, Prompting, RAG, Reinforcement Learning, Training Techniques

Large language models (LLMs) have recently demonstrated a remarkable ability to generate code from natural language (NL) prompts. However, in the real world, NL is often too ambiguous to capture the true intent behind programming problems, requiring additional input-output (I/O) specifications. Unfortunately, LLMs can have difficulty aligning their outputs with both the NL prompt and the I/O specification. In this paper, we present an approach to mitigating this issue in the context of data science programming, where tasks require explicit I/O specifications for clarity. Specifically, we propose GIFT4Code, a novel approach for the instruction fine-tuning of LLMs with respect to I/O specifications. Our method leverages synthetic data produced by the LLM itself and utilizes execution-derived feedback as a key learning signal. This feedback, in the form of program I/O specifications, is provided to the LLM to facilitate instruction fine-tuning. We evaluated our approach on two challenging data science benchmarks, Arcade and DS-1000. The results demonstrate a significant improvement in the LLM’s ability to generate code that is not only executable but also accurately aligned with user specifications, substantially improving the quality of code generation for complex data science tasks.
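The sketch below illustrates the general idea of execution-derived I/O specifications, not the paper's exact GIFT4Code pipeline: a candidate program sampled for a data science task is executed on an input DataFrame, the output's schema is summarized as a specification, and that specification is appended to the natural language intent when constructing an instruction-tuning example. The helper names (`derive_io_spec`, `build_instruction`) and the convention that the program stores its answer in a `result` variable are assumptions made for illustration.

```python
# Hypothetical sketch of spec-augmented instruction construction (not the
# authors' implementation): execute a candidate program, summarize its output
# as an I/O specification, and attach that spec to the NL intent.
import pandas as pd


def derive_io_spec(program: str, input_df: pd.DataFrame) -> str:
    """Run `program` against `input_df` and describe the output's schema."""
    namespace = {"df": input_df.copy()}
    exec(program, namespace)  # assumed convention: program assigns to `result`
    result = namespace.get("result")
    if isinstance(result, pd.DataFrame):
        cols = ", ".join(f"{c}: {t}" for c, t in result.dtypes.astype(str).items())
        return f"Output: DataFrame with {len(result)} rows and columns [{cols}]"
    return f"Output: {type(result).__name__} = {result!r}"


def build_instruction(nl_intent: str, io_spec: str) -> str:
    """Combine the NL intent with the execution-derived I/O specification."""
    return f"{nl_intent}\n# Expected {io_spec}"


# Toy usage on a small synthetic task.
df = pd.DataFrame({"city": ["A", "B", "A"], "sales": [10, 20, 30]})
candidate = "result = df.groupby('city', as_index=False)['sales'].sum()"
spec = derive_io_spec(candidate, df)
print(build_instruction("Compute total sales per city.", spec))
```

In this toy example the derived specification records the output's column names and dtypes, which is the kind of execution feedback the abstract describes being fed back to the model during instruction fine-tuning.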
