
Rethinking The Value Of Transformer Components

Wang Wenxuan, Tu Zhaopeng. arXiv 2020

[Paper]    
Model Architecture Pretraining Methods Training Techniques Transformer

The Transformer has become the state-of-the-art translation model, yet how each intermediate component contributes to model performance is not well studied, which poses significant challenges for designing optimal architectures. In this work, we bridge this gap by evaluating the impact of individual components (sub-layers) in trained Transformer models from different perspectives. Experimental results across language pairs, training strategies, and model capacities show that certain components are consistently more important than others. We also report a number of interesting findings that may help researchers better analyze, understand, and improve Transformer models. Based on these observations, we further propose a new training strategy that improves translation performance by distinguishing the unimportant components during training.
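The abstract does not spell out how component importance is measured, but one common way to probe it is ablation: zero out a single sub-layer's residual branch in a trained model and see how much the loss degrades. The sketch below illustrates that idea on a toy encoder; the model, data, gate mechanism, and loss-delta metric are illustrative assumptions, not the paper's actual procedure.

```python
# Minimal sketch (not the paper's code): estimate the importance of each
# Transformer sub-layer by zeroing its residual branch and measuring how much
# the loss increases on a held-out batch.
import torch
import torch.nn as nn

class ToyEncoderLayer(nn.Module):
    """One Transformer encoder layer with its two sub-layers kept separate."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        # Per-sub-layer gates: 1.0 keeps the sub-layer, 0.0 ablates it.
        self.gate_attn, self.gate_ffn = 1.0, 1.0

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + self.gate_attn * attn_out)     # residual path kept even when ablated
        x = self.norm2(x + self.gate_ffn * self.ffn(x))
        return x

def loss_on_batch(layers, head, x, y):
    h = x
    for layer in layers:
        h = layer(h)
    return nn.functional.cross_entropy(head(h).flatten(0, 1), y.flatten())

# Toy setup: a random batch stands in for real translation data.
torch.manual_seed(0)
d_model, vocab = 64, 100
layers = nn.ModuleList([ToyEncoderLayer(d_model) for _ in range(4)])
head = nn.Linear(d_model, vocab)
x = torch.randn(8, 16, d_model)        # (batch, seq, d_model)
y = torch.randint(0, vocab, (8, 16))   # target token ids

with torch.no_grad():
    base = loss_on_batch(layers, head, x, y)
    for i, layer in enumerate(layers):
        for name in ("gate_attn", "gate_ffn"):
            setattr(layer, name, 0.0)   # ablate one sub-layer
            delta = loss_on_batch(layers, head, x, y) - base
            setattr(layer, name, 1.0)   # restore it
            print(f"layer {i} {name[5:]:4s}: loss increase = {delta:.4f}")
```

Sub-layers whose removal barely changes the loss would be candidates for the "unimportant" components that the proposed training strategy treats differently.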

Similar Work