Exploring Quantization For Efficient Pre-training Of Transformer Language Models

Chitsaz Kamran, Fournier Quentin, Mordido Gonçalo, Chandar Sarath. arXiv 2024

Tags: Efficiency And Optimization, Fine-Tuning, Has Code, Language Modeling, Model Architecture, Pretraining Methods, Quantization, Training Techniques, Transformer

The increasing scale of Transformer models has driven a corresponding increase in their pre-training computational requirements. While quantization has proven effective after pre-training and during fine-tuning, applying it during pre-training has remained largely unexplored at scale for language modeling. This study explores the impact of quantization on efficient pre-training of Transformers, with a focus on linear layer components. By systematically applying straightforward linear quantization to weights, activations, gradients, and optimizer states, the authors assess its effects on model efficiency, stability, and performance during training. The result is a comprehensive recipe of effective quantization strategies for Transformer pre-training that promotes high training efficiency from scratch while retaining language modeling ability. Code is available at https://github.com/chandar-lab/EfficientLLMs.
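To make the idea concrete, below is a minimal sketch of the kind of linear quantization described in the abstract, applied to a Transformer linear layer during training via "fake quantization" with a straight-through estimator. The bit-width, symmetric per-tensor scaling, and the `QuantLinear` class are illustrative assumptions, not the authors' exact recipe (gradient and optimizer-state quantization are omitted; see the linked repository for the actual implementation).

```python
# Illustrative sketch only: symmetric per-tensor linear quantization with a
# straight-through estimator, applied to weights and activations of a linear
# layer so it can be trained from scratch. Not the paper's exact method.
import torch
import torch.nn as nn


def linear_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Quantize a tensor to num_bits with a symmetric per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    x_q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
    # Straight-through estimator: the forward pass uses the quantized values,
    # while gradients flow back as if quantization were the identity map.
    return x + (x_q - x).detach()


class QuantLinear(nn.Linear):
    """Linear layer whose weights and input activations are fake-quantized."""

    def __init__(self, in_features, out_features, bias=True, num_bits=8):
        super().__init__(in_features, out_features, bias=bias)
        self.num_bits = num_bits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_q = linear_quantize(self.weight, self.num_bits)
        x_q = linear_quantize(x, self.num_bits)
        return nn.functional.linear(x_q, w_q, self.bias)


# Usage: a drop-in replacement for nn.Linear inside a Transformer block.
layer = QuantLinear(512, 2048, num_bits=8)
out = layer(torch.randn(4, 512))
```

Because the straight-through estimator keeps the backward pass in full precision, this style of quantization can be enabled from the first training step, which is the regime the paper studies.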

Similar Work