Comparative Analysis Of Different Efficient Fine Tuning Methods Of Large Language Models (llms) In Low-resource Setting

Krishna Prasad Varadarajan Srinivasan, Prasanth Gumpena, Madhusudhana Yattapu, Vishal H. Brahmbhatt. Arxiv 2024

[Paper]    
Distillation Efficiency And Optimization Few Shot Fine Tuning In Context Learning Pretraining Methods Prompting Reinforcement Learning Training Techniques

In the domain of large language models (LLMs), arXiv:2305.16938 showed that few-shot full-model fine-tuning, namely Vanilla Fine Tuning (FT) and Pattern-Based Fine Tuning (PBFT), and In-Context Learning (ICL) generalize similarly on Out-Of-Domain (OOD) datasets but vary in terms of task adaptation. However, both pose challenges, especially in terms of memory requirements. In this paper, we further push the understanding of different fine-tuning strategies for LLMs and aim to place a range of these methods on the same footing for an elaborate comparison with full-model fine-tuning on two diverse datasets. To that end, we conducted a series of experiments, beginning with state-of-the-art methods such as vanilla fine-tuning and Pattern-Based Fine-Tuning (PBFT) on pre-trained models across two datasets, COLA and MNLI. We then investigate adaptive fine-tuning and the efficiency of LoRA adapters in a few-shot setting. Finally, we compare an alternative approach that has gained recent popularity, context distillation, with vanilla FT and PBFT, with and without a few-shot setup. Our findings suggest that the alternative strategies we explored can exhibit out-of-domain generalization comparable to that of vanilla FT and PBFT. PBFT underperforms vanilla FT on out-of-domain (OOD) data, emphasizing the need for effective prompts. Further, our adaptive fine-tuning and LoRA experiments perform comparably to, or slightly worse than, standard fine-tuning, as anticipated, since standard fine-tuning updates the entire model. Finally, our context distillation experiments outperform the standard fine-tuning methods. These findings underscore that the choice of an appropriate fine-tuning method ultimately depends on the available resources (memory, compute, data) and task adaptability.
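
The LoRA experiments referenced in the abstract train only low-rank adapter matrices injected into the attention layers while the base model stays frozen, which is what makes them attractive in a low-resource setting. Below is a minimal sketch of few-shot LoRA fine-tuning on COLA, assuming the Hugging Face `transformers`, `peft`, and `datasets` libraries; the model choice (`roberta-base`), the 32-example few-shot budget, and all hyperparameters are illustrative placeholders, not the paper's reported configuration.

```python
# Few-shot LoRA fine-tuning sketch on GLUE/CoLA (illustrative, not the
# paper's exact setup: model and hyperparameters are assumptions).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

model_name = "roberta-base"  # assumption: any encoder classifier works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)

# Wrap the base model so only the low-rank adapter matrices are trained.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.1,
                      target_modules=["query", "value"],
                      task_type="SEQ_CLS")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the model

# Few-shot setting: sample a small number of labeled training examples.
cola = load_dataset("glue", "cola")
few_shot = cola["train"].shuffle(seed=0).select(range(32))

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)

few_shot = few_shot.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-cola", num_train_epochs=10,
                           per_device_train_batch_size=8,
                           learning_rate=1e-4),
    train_dataset=few_shot,
)
trainer.train()
```

Because gradients and optimizer state exist only for the adapter weights, the memory footprint is far below that of vanilla FT or PBFT, which is the trade-off the abstract's "comparable or slightly worse" finding speaks to.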
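Context distillation, the alternative the paper compares against, trains a student model to match the output distribution a teacher produces when conditioned on an in-context prompt, so the prompt's effect is distilled into the weights and no longer occupies the context window at inference time. The following is a minimal single-example sketch, assuming a causal LM from Hugging Face `transformers`; the model name (`gpt2`), the prompt text, and the last-token KL loss are assumptions for illustration, not the paper's method.

```python
# Context distillation sketch: student mimics a prompt-conditioned teacher
# (model name, prompt, and loss granularity are illustrative assumptions).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
teacher = AutoModelForCausalLM.from_pretrained(model_name).eval()
student = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Classify the sentence's grammaticality:\n"  # hypothetical context
example = "The cat sat on the mat."

with torch.no_grad():
    t_ids = tokenizer(prompt + example, return_tensors="pt").input_ids
    t_logits = teacher(t_ids).logits[:, -1, :]  # teacher sees the context

s_ids = tokenizer(example, return_tensors="pt").input_ids
s_logits = student(s_ids).logits[:, -1, :]      # student does not

# KL divergence pulls the student's next-token distribution toward the
# teacher's prompt-conditioned distribution.
loss = F.kl_div(F.log_softmax(s_logits, dim=-1),
                F.softmax(t_logits, dim=-1), reduction="batchmean")
loss.backward()  # an optimizer step would follow in a real training loop
```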

Similar Work