Super Tiny Language Models

Hillier Dylan, Guertler Leon, Tan Cheston, Agrawal Palaash, Chen Ruirui, Cheng Bobby. arXiv 2024

[Paper]    
Applications Model Architecture Pretraining Methods Tokenization Tools Training Techniques Transformer

The rapid advancement of large language models (LLMs) has led to significant improvements in natural language processing but also poses challenges due to their high computational and energy demands. This paper introduces a series of research efforts focused on Super Tiny Language Models (STLMs), which aim to deliver high performance with significantly reduced parameter counts. We explore innovative techniques such as byte-level tokenization with a pooling mechanism, weight tying, and efficient training strategies. These methods aim to significantly reduce the parameter count compared to traditional models; in future work, we aim to build on these in a way that maintains and improves upon the performance of base transformer models. This series of papers will explore various subproblems, including tokenizer-free models, self-play based training, and alternative training objectives. We will target models with 10M, 50M, and 100M parameters. Our ultimate goal is to make high-performance language models more accessible and practical for a wide range of applications.
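
The abstract names weight tying and a byte-level vocabulary among the parameter-reduction techniques. As a minimal sketch (not the authors' implementation), the PyTorch snippet below ties the token embedding matrix to the output projection of a toy transformer language model, so the 256-entry byte vocabulary contributes its embedding parameters only once; all module choices and hyperparameters here are illustrative assumptions, and causal masking is omitted for brevity.

```python
import torch
import torch.nn as nn


class TinyLM(nn.Module):
    """Toy transformer LM illustrating weight tying over a byte-level vocabulary."""

    def __init__(self, vocab_size: int = 256, d_model: int = 512, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)
        # Weight tying: the output projection reuses the embedding matrix,
        # removing vocab_size * d_model parameters from the model.
        self.lm_head.weight = self.embed.weight

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h = self.backbone(self.embed(token_ids))
        return self.lm_head(h)  # logits over the byte-level vocabulary


if __name__ == "__main__":
    model = TinyLM()
    logits = model(torch.randint(0, 256, (2, 16)))  # two sequences of 16 byte tokens
    print(logits.shape)  # torch.Size([2, 16, 256])
```

With a small byte-level vocabulary the absolute saving is modest, but the same tying trick scales with vocabulary size and is a standard way to trim parameters without changing model behaviour at initialization.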

Similar Work