
TADA: Efficient Task-agnostic Domain Adaptation For Transformers

Hung Chia-Chien, Lange Lukas, Strötgen Jannik. arXiv 2023

[Paper]    
Applications, Efficiency And Optimization, Fine Tuning, Model Architecture, Pretraining Methods, Training Techniques, Transformer

Intermediate training of pre-trained transformer-based language models on domain-specific data leads to substantial gains for downstream tasks. To increase efficiency and prevent the catastrophic forgetting associated with full domain-adaptive pre-training, approaches such as adapters have been developed. However, these require additional parameters for each layer and are criticized for their limited expressiveness. In this work, we introduce TADA, a novel task-agnostic domain adaptation method that is modular, parameter-efficient, and thus data-efficient. Within TADA, we retrain the embeddings to learn domain-aware input representations and tokenizers for the transformer encoder, while freezing all other parameters of the model. Then, task-specific fine-tuning is performed. We further conduct experiments with meta-embeddings and newly introduced meta-tokenizers, resulting in one model per task in multi-domain use cases. Our broad evaluation on 4 downstream tasks for 14 domains, across single- and multi-domain setups and high- and low-resource scenarios, reveals that TADA is an effective and efficient alternative to full domain-adaptive pre-training and adapters for domain adaptation, while not introducing additional parameters or complex training steps.
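The key step described in the abstract is intermediate training in which only the embedding layer is updated while the rest of the encoder stays frozen. The sketch below shows how such embedding-only adaptation could be set up with Hugging Face Transformers; the base model name, and the omitted masked-language-modeling loop and tokenizer/meta-embedding details, are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of embedding-only domain adaptation, assuming a BERT-style
# encoder from Hugging Face Transformers. Model name and training details are
# placeholders, not the paper's exact setup.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Freeze every parameter of the transformer encoder ...
for param in model.parameters():
    param.requires_grad = False

# ... then unfreeze only the input embeddings so they can learn
# domain-aware representations during MLM training on domain text.
for param in model.get_input_embeddings().parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} parameters")

# Intermediate training would then run a standard masked-language-modeling
# loop over domain-specific text (omitted here); afterwards, the adapted
# embeddings are plugged back in and task-specific fine-tuning is performed.
```

Because only the embedding matrix receives gradients, the number of trainable parameters during this stage is a small fraction of the full model, which is what makes the approach parameter- and data-efficient compared to full domain-adaptive pre-training.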

Similar Work