
SPAFIT: Stratified Progressive Adaptation Fine-tuning For Pre-trained Large Language Models

Samir Arora, Liangliang Wang. arXiv, 2024

[Paper]
Tags: Fine Tuning, Model Architecture, Pretraining Methods, RAG, Training Techniques, Transformer

Full fine-tuning is a popular approach to adapting Transformer-based pre-trained large language models to a specific downstream task. However, its substantial computational and storage requirements have discouraged widespread use. Moreover, increasing evidence of catastrophic forgetting and overparameterization in the Transformer architecture has motivated researchers to seek parameter-efficient fine-tuning (PEFT) methods. Common PEFT methods such as LoRA and BitFit are typically applied uniformly across all layers of the model. We propose a PEFT method, called Stratified Progressive Adaptation Fine-tuning (SPAFIT), based on the localization of different types of linguistic knowledge to specific layers of the model. Our experiments, conducted on nine tasks from the GLUE benchmark, show that SPAFIT outperforms other PEFT methods while fine-tuning only a fraction of the parameters adjusted by those methods.
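The abstract does not spell out the implementation, but the core idea, applying different degrees of adaptation to different groups of Transformer layers, can be sketched as follows. This is a minimal illustrative sketch, assuming three strata (fully frozen, BitFit-style bias-only tuning, and a LoRA-style update on the feed-forward linear sublayers); the group boundaries, the hand-rolled `LoRALinear` adapter, and all hyperparameters are assumptions made for illustration, not the authors' configuration.

```python
# Minimal sketch of a stratified PEFT setup in the spirit of SPAFIT.
# Assumptions (not taken from the paper): the stratum boundaries, the choice of
# bias-only tuning and LoRA as the per-stratum adapters, and all hyperparameters.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a low-rank (LoRA-style) additive update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # keep pre-trained weights frozen
        self.lora_a = nn.Parameter(torch.zeros(rank, base.in_features))
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        nn.init.normal_(self.lora_a, std=0.02)  # B starts at zero, so the initial update is zero
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling


def stratify(layers: nn.ModuleList, frozen_end: int, bias_end: int):
    """Assign each Transformer layer to one of three strata:
    [0, frozen_end)        -> fully frozen
    [frozen_end, bias_end) -> bias-only (BitFit-style) tuning
    [bias_end, end)        -> LoRA on the feed-forward linear sublayers
    """
    for i, layer in enumerate(layers):
        for p in layer.parameters():
            p.requires_grad = False
        if i < frozen_end:
            continue                            # stratum 1: frozen
        if i < bias_end:                        # stratum 2: biases only
            for name, p in layer.named_parameters():
                if name.endswith("bias"):
                    p.requires_grad = True
            continue
        # stratum 3: wrap linear sublayers (here the FFN projections) with LoRA
        for name, module in list(layer.named_children()):
            if isinstance(module, nn.Linear):
                setattr(layer, name, LoRALinear(module))


# Toy stand-in for a pre-trained Transformer stack (12 encoder layers).
layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
     for _ in range(12)]
)
stratify(layers, frozen_end=4, bias_end=8)

trainable = sum(p.numel() for l in layers for p in l.parameters() if p.requires_grad)
total = sum(p.numel() for l in layers for p in l.parameters())
print(f"trainable parameters: {trainable} / {total}")
```

The final print shows that only a small fraction of the stack's parameters remain trainable, which is the property the paper's "fraction of the parameters" claim refers to; the exact fraction here depends entirely on the illustrative stratum boundaries and rank chosen above.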

Similar Work