Starcoder: May The Source Be With You!

Li Raymond, Ben Allal Loubna, Zi Yangtian, Muennighoff Niklas, Kocetkov Denis, Mou Chenghao, Marone Marc, Akiki Christopher, Li Jia, Chim Jenny, Liu Qian, Zheltonozhskii Evgenii, Zhuo Terry Yue, Wang Thomas, Dehaene Olivier, Davaadorj Mishig, Lamy-Poirier Joel, Monteiro João, Shliazhko Oleh, Gontier Nicolas, Meade Nicholas, Zebaze Armel, Yee Ming-Ho, Umapathi Logesh Kumar, Zhu Jian, Lipkin Benjamin, Oblokulov Muhtasham, Wang Zhiruo, Murthy Rudra, Stillerman Jason, Patel Siva Sankalp, Abulkhanov Dmitry, Zocca Marco, Dey Manan, Zhang Zhihan, Fahmy Nour, Bhattacharyya Urvashi, Yu Wenhao, Singh Swayam, Luccioni Sasha, Villegas Paulo, Kunakov Maxim, Zhdanov Fedor, Romero Manuel, Lee Tony, Timor Nadav, Ding Jennifer, Schlesinger Claire, Schoelkopf Hailey, Ebert Jan, Dao Tri, Mishra Mayank, Gu Alex, Robinson Jennifer, Anderson Carolyn Jane, Dolan-Gavitt Brendan, Contractor Danish, Reddy Siva, Fried Daniel, Bahdanau Dzmitry, Jernite Yacine, Muñoz Ferrandis Carlos, Hughes Sean, Wolf Thomas, Guha Arjun, Von Werra Leandro, De Vries Harm. arXiv 2023

[Paper]    
Attention Mechanism, Ethics And Bias, Model Architecture, Prompting, Reinforcement Learning, Responsible AI, Tools

The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B-parameter models with an 8K context length, infilling capabilities, and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tune StarCoderBase on 35B Python tokens to create StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python, can be prompted to achieve 40% pass@1 on HumanEval, and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license.
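
The headline architectural choice in the abstract, multi-query attention, is straightforward to illustrate. Below is a minimal PyTorch sketch, not the BigCode implementation; the module and parameter names are illustrative. The key idea is that all query heads share a single key/value head, which shrinks the per-token KV cache by a factor of the head count and is what enables the fast large-batch inference the abstract mentions.

```python
import torch
import torch.nn.functional as F
from torch import nn

class MultiQueryAttention(nn.Module):
    """Minimal multi-query attention sketch: all query heads share one K/V head.

    Compared to standard multi-head attention, the key/value projections
    produce a single head, so the per-token KV cache is num_heads times
    smaller -- the property credited for fast large-batch inference.
    """

    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = d_model // num_heads
        self.q_proj = nn.Linear(d_model, d_model)        # per-head queries
        self.k_proj = nn.Linear(d_model, self.head_dim)  # single shared key head
        self.v_proj = nn.Linear(d_model, self.head_dim)  # single shared value head
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, seq_len, d_model = x.shape
        # Queries keep one head per attention head: (batch, heads, seq, head_dim).
        q = self.q_proj(x).view(batch, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
        # Keys/values have a single head, expanded so every query head reads it.
        k = self.k_proj(x).view(batch, 1, seq_len, self.head_dim).expand(-1, self.num_heads, -1, -1)
        v = self.v_proj(x).view(batch, 1, seq_len, self.head_dim).expand(-1, self.num_heads, -1, -1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)  # causal attention
        out = out.transpose(1, 2).reshape(batch, seq_len, d_model)
        return self.out_proj(out)

# Shape check: output matches the input embedding size.
x = torch.randn(2, 16, 512)
print(MultiQueryAttention(d_model=512, num_heads=8)(x).shape)  # torch.Size([2, 16, 512])
```

The infilling capability mentioned above follows the fill-in-the-middle format: StarCoder's tokenizer includes `<fim_prefix>`, `<fim_suffix>`, and `<fim_middle>` special tokens, so a prompt of the form `<fim_prefix>{code before the gap}<fim_suffix>{code after the gap}<fim_middle>` asks the model to generate the missing middle span.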

Similar Work