QuIP: 2-Bit Quantization Of Large Language Models With Guarantees
Jerry Chee, Yaohui Cai, Volodymyr Kuleshov, Christopher De Sa. arXiv 2023
[Paper]
[Code]
Efficiency And Optimization
Has Code
Quantization
Training Techniques
This work studies post-training parameter quantization in large language
models (LLMs). We introduce quantization with incoherence processing (QuIP), a
new method based on the insight that quantization benefits from
incoherent weight and Hessian matrices, i.e., from the weights being
even in magnitude and the directions in which it is important to round them
accurately being unaligned with the coordinate axes. QuIP consists of two
steps: (1) an adaptive rounding procedure minimizing a quadratic proxy
objective; (2) efficient pre- and post-processing that ensures weight and
Hessian incoherence via multiplication by random orthogonal matrices. We
complement QuIP with the first theoretical analysis for an LLM-scale
quantization algorithm, and show that our theory also applies to an existing
method, OPTQ. Empirically, we find that our incoherence preprocessing improves
several existing quantization algorithms and yields the first LLM quantization
methods that produce viable results using only two bits per weight. Our code
can be found at https://github.com/Cornell-RelaxML/QuIP.
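To make the two-step recipe concrete, here is a minimal, hypothetical NumPy sketch of the incoherence-processing idea: conjugate the weight matrix by random orthogonal matrices, quantize to 2 bits, and undo the rotation, comparing against direct quantization under a quadratic proxy objective. It uses plain nearest rounding in place of the paper's adaptive rounding (LDLQ), and the helper names, 2-bit grid, and toy Hessian are illustrative assumptions rather than the authors' implementation (see the linked repo for that).

```python
# Toy sketch of incoherence processing for 2-bit weight quantization.
# Assumptions: nearest rounding instead of the paper's adaptive rounding,
# a min-max uniform 2-bit grid, and a proxy Hessian built from random inputs.
import numpy as np

def random_orthogonal(n, rng):
    """Draw a Haar-random orthogonal matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))  # sign fix so the distribution is uniform

def quantize_2bit(w):
    """Round each entry to the nearest of 4 uniform levels spanning w's range."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 3.0  # 2 bits -> 4 levels
    return np.round((w - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
d_out, d_in = 64, 64
W = rng.standard_normal((d_out, d_in))      # weight matrix of one linear layer
X = rng.standard_normal((d_in, 256))        # toy calibration inputs
H = X @ X.T / X.shape[1]                    # proxy Hessian, second moment of inputs

def proxy_loss(W_hat):
    """Quadratic proxy objective tr((W_hat - W) H (W_hat - W)^T)."""
    D = W_hat - W
    return np.trace(D @ H @ D.T)

# Direct 2-bit quantization of the raw weights.
loss_direct = proxy_loss(quantize_2bit(W))

# Incoherence processing: rotate W by random orthogonal U, V, quantize, undo.
U, V = random_orthogonal(d_out, rng), random_orthogonal(d_in, rng)
W_tilde = U @ W @ V.T                       # incoherent representation
W_hat = U.T @ quantize_2bit(W_tilde) @ V    # map quantized weights back
loss_incoherent = proxy_loss(W_hat)

print(f"proxy loss, direct quantization:      {loss_direct:.3f}")
print(f"proxy loss, with incoherence process: {loss_incoherent:.3f}")
```

In the actual method the random rotations are chosen so they can be applied cheaply at inference time and the rounding step uses the Hessian itself; this sketch only illustrates the rotate-quantize-unrotate structure and the proxy objective being minimized.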
Similar Work