
SDQ: Sparse Decomposed Quantization For LLM Inference

Geonhwa Jeong, Po-An Tsai, Stephen W. Keckler, Tushar Krishna. arXiv 2024

[Paper]    
Efficiency And Optimization, Prompting, Quantization

Recently, large language models (LLMs) have shown impressive performance on both task-specific workloads and general tasks given appropriate prompts. However, to achieve this unprecedented performance, recent LLMs use billions to trillions of parameters, which hinders their wide adoption due to extremely large compute and memory requirements. To address this issue, various model compression methods are being actively investigated. In this work, we propose SDQ (Sparse Decomposed Quantization), which exploits both structured sparsity and quantization to achieve high compute and memory efficiency. Our evaluations show that SDQ can achieve 4x effective compute throughput with less than 1% quality drop.
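The abstract names the two ingredients, structured sparsity and quantization, without spelling out how they combine. As a rough intuition for what a "sparse decomposed" scheme can look like, the sketch below splits a weight matrix into a 2:4 structured-sparse full-precision component (the largest-magnitude weights, as supported by sparse tensor cores) plus a low-bit quantized dense residual. This is a generic illustration assuming 2:4 sparsity, symmetric 4-bit quantization, and per-tensor scaling, not the paper's actual algorithm; all function names are hypothetical.

```python
import numpy as np

def prune_2_4(w: np.ndarray) -> np.ndarray:
    """Keep the 2 largest-magnitude weights in every group of 4 (2:4 structured sparsity)."""
    flat = w.copy().reshape(-1, 4)
    # Indices of the two smallest-magnitude entries per group of four; zero them out.
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]
    np.put_along_axis(flat, drop, 0.0, axis=1)
    return flat.reshape(w.shape)

def quantize_int4(w: np.ndarray):
    """Symmetric per-tensor 4-bit quantization: returns integer codes and a scale."""
    scale = np.abs(w).max() / 7.0  # map the residual's range onto [-7, 7]
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def sparse_decompose(w: np.ndarray):
    """Split W into a 2:4 structured-sparse component kept in full precision,
    plus a 4-bit quantized dense residual, so that W ~ sparse + scale * q."""
    sparse = prune_2_4(w)                  # salient weights, structured for sparse hardware
    q, scale = quantize_int4(w - sparse)   # quantize whatever the sparse part misses
    return sparse, q, scale

# Usage: decompose a random weight matrix and check the reconstruction error.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16)).astype(np.float32)
sparse, q, scale = sparse_decompose(W)
W_hat = sparse + scale * q.astype(np.float32)
print("max reconstruction error:", np.abs(W - W_hat).max())
```

Under this kind of split, the dense matmul runs on low-bit quantized weights while the sparse component rides on structured-sparsity hardware support, which is one plausible route to the combined compute and memory savings the abstract claims.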

Similar Work