
MINI-SEQUENCE TRANSFORMER: Optimizing Intermediate Memory For Long Sequences Training

Luo Cheng, Zhao Jiawei, Chen Zhuoming, Chen Beidi, Anandkumar Anima. arXiv 2024

[Paper]    
Efficiency And Optimization, Model Architecture, Pretraining Methods, Tools, Training Techniques, Transformer

We introduce Mini-Sequence Transformer (MsT), a simple and effective methodology for highly efficient and accurate LLM training with extremely long sequences. MsT partitions input sequences and iteratively processes the resulting mini-sequences to reduce intermediate memory usage. Integrated with activation recomputation, it enables significant memory savings in both the forward and backward passes. In experiments with the Llama3-8B model, MsT trains on sequences up to 12x longer than standard implementations with no measurable degradation in throughput or convergence, thanks to careful memory optimizations. MsT is fully general, implementation-agnostic, and requires minimal code changes to integrate with existing LLM training frameworks.
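The core idea can be illustrated with a minimal PyTorch sketch (the helper `mini_sequence_mlp`, the chunk count `num_mini`, and the layer sizes below are illustrative assumptions, not the authors' implementation): because MLP and LM-head blocks act position-wise, splitting the sequence dimension into mini-sequences shrinks their large intermediate activations without changing the output.

```python
import torch
import torch.nn as nn

def mini_sequence_mlp(mlp: nn.Module, x: torch.Tensor, num_mini: int = 4) -> torch.Tensor:
    """Apply a position-wise MLP block over the sequence in mini-sequence chunks.

    x has shape (batch, seq_len, hidden). Splitting along seq_len means each
    chunk's intermediate activation is roughly num_mini times smaller than a
    single full-sequence pass, at the cost of iterating num_mini times.
    """
    chunks = torch.chunk(x, num_mini, dim=1)      # split along the sequence dimension
    outputs = [mlp(chunk) for chunk in chunks]    # each chunk allocates a small intermediate
    # The MLP is position-wise, so concatenating chunk outputs matches mlp(x)
    # up to floating-point rounding.
    return torch.cat(outputs, dim=1)

# Usage with a hypothetical Llama-style MLP (dimensions illustrative).
hidden, intermediate = 1024, 4096
mlp = nn.Sequential(
    nn.Linear(hidden, intermediate),
    nn.SiLU(),
    nn.Linear(intermediate, hidden),
)
x = torch.randn(1, 8192, hidden)
y = mini_sequence_mlp(mlp, x, num_mini=8)
```

In this sketch only the peak size of the intermediate activation changes; combining such chunked execution with activation recomputation is what yields the memory savings in both forward and backward passes described above.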

Similar Work