
RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization

Huang Xijie, Liu Zechun, Liu Shih-Yang, Cheng Kwang-Ting. arXiv 2024

[Paper] [Code]    
Tags: Efficiency and Optimization, Fine-Tuning, Has Code, Multimodal Models, Pretraining Methods, Quantization, Security, Training Techniques

Low-Rank Adaptation (LoRA), as a representative Parameter-Efficient Fine-Tuning (PEFT) method, significantly enhances training efficiency by updating only a small portion of the weights in Large Language Models (LLMs). Recently, weight-only quantization techniques have also been applied to LoRA methods to reduce the memory footprint of fine-tuning. However, applying weight-activation quantization to the LoRA pipeline is under-explored, and we observe substantial performance degradation primarily due to the presence of activation outliers. In this work, we propose RoLoRA, the first LoRA-based scheme for effective weight-activation quantization. RoLoRA utilizes rotation for outlier elimination and proposes rotation-aware fine-tuning to preserve the outlier-free characteristics in rotated LLMs. Experimental results show RoLoRA consistently improves low-bit LoRA convergence and post-training quantization robustness in weight-activation settings. We evaluate RoLoRA across LLaMA2-7B/13B and LLaMA3-8B, achieving up to a 29.5% absolute accuracy gain for 4-bit weight-activation quantized LLaMA2-13B on commonsense reasoning tasks compared to the LoRA baseline. We further demonstrate its effectiveness on Large Multimodal Models (LLaVA-1.5-7B). Code is available at https://github.com/HuangOwen/RoLoRA
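The rotation idea can be illustrated with a minimal, self-contained sketch (not the paper's implementation): an orthogonal rotation R applied to activations, with its transpose folded into the following weight matrix, leaves the full-precision layer output unchanged while spreading outlier channels across dimensions, so low-bit per-tensor quantization loses less information. The random-QR rotation, the synthetic outlier data, and the simplified symmetric quantizer below are illustrative assumptions; RoLoRA's actual rotation construction and quantizers are described in the paper.

```python
import torch

def random_orthogonal(dim, seed=0):
    # QR decomposition of a random Gaussian matrix yields an orthogonal rotation.
    g = torch.Generator().manual_seed(seed)
    a = torch.randn(dim, dim, generator=g)
    q, _ = torch.linalg.qr(a)
    return q

def quantize_per_tensor(x, bits=4):
    # Simplified symmetric uniform quantizer (illustrative, not the paper's quantizer).
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max() / qmax
    return torch.round(x / scale).clamp(-qmax, qmax) * scale

dim = 512
# Synthetic activations with a few outlier channels, mimicking LLM activation statistics.
x = torch.randn(128, dim)
x[:, :4] *= 30.0  # outlier channels dominate the per-tensor quantization range

w = torch.randn(dim, dim) * 0.02
r = random_orthogonal(dim)

# Computational invariance: (x @ r) @ (r.T @ w) == x @ w in full precision,
# so rotating activations and counter-rotating weights preserves the layer output.
y_ref = x @ w
y_plain = quantize_per_tensor(x) @ quantize_per_tensor(w)
y_rot = quantize_per_tensor(x @ r) @ quantize_per_tensor(r.T @ w)

print("quantization error, no rotation:  ", (y_plain - y_ref).norm().item())
print("quantization error, with rotation:", (y_rot - y_ref).norm().item())
```

Running this sketch typically shows a noticeably smaller output error in the rotated case, since the rotation flattens the activation distribution before quantization; rotation-aware fine-tuning in RoLoRA then keeps this outlier-free property intact while LoRA adapters are trained.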

Similar Work