Exploring Versatile Generative Language Model Via Parameter-efficient Transfer Learning

Zhaojiang Lin, Andrea Madotto, Pascale Fung. arXiv 2020 – 34 citations

[Paper]
Tags: Fine-Tuning, Training Techniques

Fine-tuning pre-trained generative language models on downstream language generation tasks has shown promising results. However, it comes at the cost of a separate, large model for each task, which is not ideal in low-memory/power scenarios (e.g., mobile). In this paper, we propose an effective way to fine-tune for multiple downstream generation tasks simultaneously using a single, large pre-trained model. Experiments on five diverse language generation tasks show that, by adding only 2-3% task-specific parameters per task, our model can match or even exceed the performance of fine-tuning the whole model.
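
The abstract describes the general adapter-style recipe: freeze the shared pre-trained model and train only a small residual bottleneck module per task. Below is a minimal PyTorch sketch of that idea; the module names, bottleneck size, and toy backbone dimensions are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, hidden_size: int, bottleneck: int = 100):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection lets the adapter start near the identity,
        # so the frozen backbone's behavior is preserved at initialization.
        return x + self.up(self.act(self.down(x)))

# Toy GPT-2-sized backbone standing in for the shared pre-trained model.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, dim_feedforward=3072,
                               batch_first=True),
    num_layers=12,
)
for p in backbone.parameters():
    p.requires_grad = False  # the shared model is never updated

# One adapter per layer, per task; in a full model these would be
# interleaved after each transformer layer. Only these are trained.
task_adapters = nn.ModuleList([ResidualAdapter(768) for _ in range(12)])

trainable = sum(p.numel() for p in task_adapters.parameters())
frozen = sum(p.numel() for p in backbone.parameters())
print(f"task-specific share: {trainable / frozen:.1%}")  # ~2% in this toy setup
```

Since each downstream task contributes only its own adapter stack while the backbone is shared, adding a task costs a few percent of the model size rather than a full copy, which is the memory saving the abstract highlights.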

Similar Work