
Synchromesh: Reliable Code Generation From Pre-trained Language Models

Gabriel Poesia, Oleksandr Polozov, Vu Le, Ashish Tiwari, Gustavo Soares, Christopher Meek, Sumit Gulwani. arXiv 2022

[Paper]    
Applications, Few Shot, Fine Tuning, GPT, Model Architecture, Pretraining Methods, RAG, Reinforcement Learning, Tools, Training Techniques

Large pre-trained language models have been used to generate code, providing a flexible interface for synthesizing programs from natural language specifications. However, they often violate syntactic and semantic rules of their output language, limiting their practical usability. In this paper, we propose Synchromesh: a framework for substantially improving the reliability of pre-trained models for code generation. Synchromesh comprises two components. First, it retrieves few-shot examples from a training bank using Target Similarity Tuning (TST), a novel method for semantic example selection. TST learns to recognize utterances that describe similar target programs despite differences in surface natural language features. Then, Synchromesh feeds the examples to a pre-trained language model and samples programs using Constrained Semantic Decoding (CSD): a general framework for constraining the output to a set of valid programs in the target language. CSD leverages constraints on partial outputs to sample complete correct programs, and needs neither re-training nor fine-tuning of the language model. We evaluate our methods by synthesizing code from natural language descriptions using GPT-3 and Codex in three real-world languages: SQL queries, Vega-Lite visualizations, and SMCalFlow programs. These domains showcase rich constraints that CSD is able to enforce, including syntax, scope, typing rules, and contextual logic. We observe substantial complementary gains from CSD and TST in prediction accuracy and in effectively preventing run-time errors.
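The core idea behind CSD is a decode-time mask: at every step, candidate next tokens are checked against a completion engine and rejected if the resulting partial program could no longer be completed into a valid one, so no retraining of the model is needed. The sketch below illustrates that idea only; `score_next_tokens`, `can_still_be_valid`, and the toy grammar are hypothetical stand-ins for the language model and the paper's completion engine, not the authors' implementation.

```python
# Minimal sketch of the idea behind Constrained Semantic Decoding (CSD),
# not the authors' implementation: at each step, candidate tokens are
# masked out unless the resulting partial program can still be completed
# into a valid one. `score_next_tokens` stands in for the language model
# and `can_still_be_valid` for the paper's completion engine.

from typing import Callable, Dict, List


def constrained_decode(
    score_next_tokens: Callable[[List[str]], Dict[str, float]],
    can_still_be_valid: Callable[[List[str]], bool],
    end_token: str = "<eos>",
    max_len: int = 32,
) -> List[str]:
    """Greedy decoding that only considers tokens the validity oracle accepts."""
    output: List[str] = []
    for _ in range(max_len):
        scores = score_next_tokens(output)
        allowed = {
            tok: s
            for tok, s in scores.items()
            if tok == end_token or can_still_be_valid(output + [tok])
        }
        if not allowed:
            break  # no valid continuation remains
        best = max(allowed, key=allowed.get)
        if best == end_token:
            break
        output.append(best)
    return output


if __name__ == "__main__":
    # Toy "language": the only valid programs are "SELECT (name|age) FROM people".
    grammar = [["SELECT"], ["name", "age"], ["FROM"], ["people"]]

    def can_still_be_valid(partial: List[str]) -> bool:
        if len(partial) > len(grammar):
            return False
        return all(tok in grammar[i] for i, tok in enumerate(partial))

    def score_next_tokens(partial: List[str]) -> Dict[str, float]:
        # Stand-in for LM logits that always prefers an invalid token ("DROP"),
        # showing that the mask, not the model, enforces validity.
        vocab = ["SELECT", "name", "age", "FROM", "people", "DROP", "<eos>"]
        scores = {tok: 0.5 for tok in vocab}
        scores["DROP"] = 1.0
        scores["<eos>"] = 0.1
        return scores

    print(constrained_decode(score_next_tokens, can_still_be_valid))
    # -> ['SELECT', 'name', 'FROM', 'people']
```

In Synchromesh itself, the analogous check is performed incrementally by a completion engine that encodes the rich constraints described above (syntax, scope, typing rules, and contextual logic), and the filtering is applied to the model's token distribution during sampling rather than to a toy grammar.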

Similar Work