
ReALLM: A General Framework For LLM Compression And Fine-tuning

Leconte Louis, Bedin Lisa, Nguyen Van Minh, Moulines Eric. arXiv 2024

[Paper]    
Tags: Efficiency And Optimization, Fine Tuning, Pretraining Methods, Quantization, Tools, Training Techniques

We introduce ReALLM, a novel approach for compression and memory-efficient adaptation of pre-trained language models that encompasses most post-training quantization and fine-tuning methods for budgets below 4 bits. Pre-trained matrices are decomposed into a high-precision low-rank component and a vector-quantized latent representation (using an autoencoder). During the fine-tuning step, only the low-rank components are updated. Our results show that pre-trained matrices exhibit different patterns, so ReALLM adapts the shape of the encoder (small/large embedding, high/low-bit VQ, etc.) to each matrix. ReALLM represents each matrix with a small embedding on \(b\) bits and a neural decoder model \(\mathcal{D}_\phi\) with its weights on \(b_\phi\) bits. Decompressing a matrix requires only one embedding and a single forward pass through the decoder. Our weight-only quantization algorithm yields the best results on language generation tasks (C4 and WikiText-2) for a budget of \(3\) bits without any training. With a budget of \(2\) bits, ReALLM achieves state-of-the-art performance after fine-tuning on a small calibration dataset.
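To make the decomposition concrete, below is a minimal PyTorch sketch of the idea as the abstract describes it: a pre-trained matrix is approximated by a frozen decoder \(\mathcal{D}_\phi\) applied to a small latent embedding, plus a trainable high-precision low-rank correction. All names, shapes, and the decoder architecture here (`ReALLMLinear`, `rank`, `latent_dim`, the two-layer MLP) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a ReALLM-style decomposition: W ~ D_phi(z) + A @ B.
# In the paper, z is stored quantized on b bits and the decoder weights on
# b_phi bits; for simplicity this sketch keeps everything in full precision
# and only freezes the parts that would be quantized.

class ReALLMLinear(nn.Module):
    def __init__(self, out_features, in_features, rank=16, latent_dim=64):
        super().__init__()
        # High-precision low-rank component: the only trainable parameters
        # during fine-tuning. A starts at zero so training begins from the
        # decoded (compressed) matrix.
        self.A = nn.Parameter(torch.zeros(out_features, rank))
        self.B = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        # Small per-matrix embedding z (frozen; quantized in practice).
        self.z = nn.Parameter(torch.randn(latent_dim), requires_grad=False)
        # Neural decoder D_phi (frozen; weights quantized in practice).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, out_features * in_features),
        )
        for p in self.decoder.parameters():
            p.requires_grad = False
        self.shape = (out_features, in_features)

    def weight(self):
        # Decompression: one embedding and a single decoder forward pass,
        # then add the low-rank correction.
        dense = self.decoder(self.z).view(self.shape)
        return dense + self.A @ self.B

    def forward(self, x):
        return x @ self.weight().t()
```

Under these assumptions, only `A` and `B` receive gradients during fine-tuning, which mirrors the abstract's claim that just the low-rank components are updated while the quantized latent and decoder stay fixed.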

Similar Work