Natural Language To Code Using Transformers

Kusupati Uday, Ailavarapu Venkata Ravi Teja. arXiv 2022

[Paper]    
Attention Mechanism · Model Architecture · Pretraining Methods · Transformer

We tackle the problem of generating code snippets from natural language descriptions using the CoNaLa dataset. We use the self-attention-based transformer architecture and show that it performs better than a recurrent attention-based encoder-decoder. Furthermore, we develop a modified form of back translation and use cycle-consistent losses to train the model in an end-to-end fashion. We achieve a BLEU score of 16.99, beating the previously reported baseline of the CoNaLa challenge.
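The sketch below illustrates the general shape of this setup, not the paper's exact method: a transformer encoder-decoder mapping natural-language tokens to code tokens, plus a reverse model used for a back-translation/cycle-consistency term. It assumes PyTorch, toy token-id tensors in place of CoNaLa data, and illustrative names (`Seq2SeqTransformer`, `nl2code`, `code2nl`); hyperparameters, tokenization, decoding strategy, and loss weighting are placeholders.

```python
# Minimal sketch, assuming PyTorch and toy data; not the authors' released code.
import torch
import torch.nn as nn

class Seq2SeqTransformer(nn.Module):
    """Self-attention encoder-decoder mapping source token ids to target token ids."""
    def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Causal mask so the decoder cannot attend to future target tokens.
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        h = self.transformer(self.embed(src_ids), self.embed(tgt_ids), tgt_mask=tgt_mask)
        return self.out(h)

# Two directions trained jointly: NL -> code and code -> NL (back translation).
vocab_size = 1000
nl2code = Seq2SeqTransformer(vocab_size)
code2nl = Seq2SeqTransformer(vocab_size)
criterion = nn.CrossEntropyLoss()
optim = torch.optim.Adam(
    list(nl2code.parameters()) + list(code2nl.parameters()), lr=1e-4
)

# Toy batch of token ids (batch=2, seq_len=8); real inputs would be tokenized CoNaLa pairs.
nl = torch.randint(0, vocab_size, (2, 8))
code = torch.randint(0, vocab_size, (2, 8))

optim.zero_grad()

# Supervised translation losses in both directions (teacher forcing, shifted targets).
loss_sup = criterion(
    nl2code(nl, code[:, :-1]).reshape(-1, vocab_size), code[:, 1:].reshape(-1)
)
loss_sup = loss_sup + criterion(
    code2nl(code, nl[:, :-1]).reshape(-1, vocab_size), nl[:, 1:].reshape(-1)
)

# Cycle-consistency term: NL -> pseudo-code -> NL should reconstruct the NL input.
# Pseudo-code here is a greedy argmax under teacher forcing for brevity; a real
# back-translation step would decode the pseudo-targets autoregressively.
with torch.no_grad():
    pseudo_code = nl2code(nl, code[:, :-1]).argmax(-1)
loss_cycle = criterion(
    code2nl(pseudo_code, nl[:, :-1]).reshape(-1, vocab_size), nl[:, 1:].reshape(-1)
)

(loss_sup + loss_cycle).backward()
optim.step()
```

In this sketch the supervised and cycle losses are simply summed; how the paper weights the terms and generates pseudo-parallel data is described in the paper itself.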

Similar Work