Efficient Language Modeling With Sparse All-MLP

Ping Yu, Mikel Artetxe, Myle Ott, Sam Shleifer, Hongyu Gong, Ves Stoyanov, Xian Li. arXiv 2022

[Paper]    
Tags: Attention Mechanism, Efficiency and Optimization, In-Context Learning, Language Modeling, Model Architecture, Pretraining Methods, Prompting, Training Techniques, Transformer

All-MLP architectures have attracted increasing interest as an alternative to attention-based models. In NLP, recent work like gMLP shows that all-MLPs can match Transformers in language modeling, but they still lag behind on downstream tasks. In this work, we analyze the limitations of MLPs in expressiveness, and propose sparsely activated MLPs with mixture-of-experts (MoEs) in both the feature and input (token) dimensions. Such sparse all-MLPs significantly increase model capacity and expressiveness while keeping the compute constant. We address critical challenges in incorporating conditional computation with two routing strategies. The proposed sparse all-MLP improves language modeling perplexity and obtains up to 2\(\times\) improvement in training efficiency compared to both Transformer-based MoEs (GShard, Switch Transformer, BASE Layers and HASH Layers) as well as dense Transformers and all-MLPs. Finally, we evaluate its zero-shot in-context learning performance on six downstream tasks, and find that it surpasses Transformer-based MoEs and dense Transformers.
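
To make the MoE idea concrete, below is a minimal, hypothetical sketch of a token-routed sparse MLP block in PyTorch. It shows only generic top-1 token routing over a set of expert MLPs; it is not the authors' sMLP implementation (which also routes along the feature dimension), and the class name `TokenRoutedMoEMLP` and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenRoutedMoEMLP(nn.Module):
    """Illustrative top-1 token-routing MoE feed-forward block (not the paper's code)."""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # learned gate over experts
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten tokens so each token is routed independently
        tokens = x.reshape(-1, x.size(-1))
        gate = F.softmax(self.router(tokens), dim=-1)   # (num_tokens, num_experts)
        expert_idx = gate.argmax(dim=-1)                # top-1 expert per token
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            mask = expert_idx == e
            if mask.any():
                # scale by the gate probability so the router receives gradient
                out[mask] = expert(tokens[mask]) * gate[mask, e].unsqueeze(-1)
        return out.reshape_as(x)

# usage: route each token of a (2, 16, 512) batch to one of 4 expert MLPs
layer = TokenRoutedMoEMLP(d_model=512, d_hidden=2048, num_experts=4)
y = layer(torch.randn(2, 16, 512))
```

Because only one expert runs per token, the per-token compute stays roughly that of a single dense MLP while total parameter count grows with the number of experts, which is the capacity-versus-compute trade-off the abstract describes.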

Similar Work