Mini-model Adaptation: Efficiently Extending Pretrained Models To New Languages Via Aligned Shallow Training

Marchisio Kelly, Lewis Patrick, Chen Yihong, Artetxe Mikel. arXiv 2022

[Paper]    
Fine Tuning · Masked Language Model · Model Architecture · Pretraining Methods · RAG · Tools · Training Techniques · Transformer

Prior work shows that it is possible to expand pretrained Masked Language Models (MLMs) to new languages by learning a new set of embeddings, while keeping the transformer body frozen. Despite learning a small subset of parameters, this approach is not compute-efficient, as training the new embeddings requires a full forward and backward pass over the entire model. We propose mini-model adaptation, a compute-efficient alternative that builds a shallow mini-model from a fraction of a large model’s parameters. New language-specific embeddings can then be efficiently trained over the mini-model and plugged into the aligned large model for rapid cross-lingual transfer. We explore two approaches to learn mini-models: MiniJoint, which jointly pretrains the primary model and the mini-model using a single transformer with a secondary MLM head at a middle layer; and MiniPost, where we start from a regular pretrained model, build a mini-model by extracting and freezing a few layers, and learn a small number of parameters on top. Experiments on XNLI, MLQA and PAWS-X show that mini-model adaptation matches the performance of the standard approach using 2.3x less compute on average.
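The sketch below is a minimal, illustrative rendering of the MiniJoint idea described in the abstract: a single transformer encoder with a secondary MLM head attached at a middle layer, so that the bottom layers plus that head form a shallow mini-model over which new language-specific embeddings can be trained cheaply. All class names, layer counts, dimensions, and the unweighted loss sum are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's code): a MiniJoint-style encoder
# with a secondary MLM head at a middle layer. Names and sizes are hypothetical.
import torch
import torch.nn as nn

class MiniJointMLM(nn.Module):
    """Encoder with a primary MLM head on top and a secondary MLM head at a
    middle layer; the bottom layers plus the secondary head act as the
    shallow mini-model used for embedding-only adaptation."""

    def __init__(self, vocab_size=32000, d_model=768, n_layers=12, n_heads=12, mid_layer=4):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
            for _ in range(n_layers)
        )
        self.mid_layer = mid_layer
        self.mid_head = nn.Linear(d_model, vocab_size)    # mini-model MLM head
        self.final_head = nn.Linear(d_model, vocab_size)  # full-model MLM head

    def forward(self, input_ids, mini_only=False):
        h = self.embeddings(input_ids)
        mid_logits = None
        for i, layer in enumerate(self.layers):
            h = layer(h)
            if i + 1 == self.mid_layer:
                mid_logits = self.mid_head(h)
                if mini_only:  # mini-model path: skip the upper layers entirely
                    return mid_logits
        return self.final_head(h), mid_logits


# Joint pretraining: optimize the sum of the full-model and mini-model MLM losses.
model = MiniJointMLM()
loss_fn = nn.CrossEntropyLoss()
input_ids = torch.randint(0, 32000, (2, 16))   # dummy masked inputs
labels = torch.randint(0, 32000, (2, 16))      # dummy MLM targets
full_logits, mid_logits = model(input_ids)
loss = loss_fn(full_logits.view(-1, 32000), labels.view(-1)) \
     + loss_fn(mid_logits.view(-1, 32000), labels.view(-1))
loss.backward()

# Adaptation to a new language (sketch): freeze everything except a fresh
# embedding table, then train with mini_only=True so forward and backward
# passes only touch the shallow bottom layers.
for p in model.parameters():
    p.requires_grad = False
model.embeddings = nn.Embedding(32000, 768)    # new language-specific embeddings
```

After training over the mini-model, the new embeddings would be plugged into the aligned full model for cross-lingual transfer, as the abstract describes; the compute saving comes from the forward and backward passes during adaptation running over only the shallow bottom layers.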

Similar Work