LLMem: Estimating GPU Memory Usage for Fine-tuning Pre-trained LLMs

Kim Taeho, Wang Yanming, Chaturvedi Vatshank, Gupta Lokesh, Kim Seyeon, Kwon Yongin, Ha Sangtae. arXiv 2024

[Paper]    
Fine Tuning, Model Architecture, Pretraining Methods, RAG, Tools, Training Techniques, Transformer

Fine-tuning pre-trained large language models (LLMs) with limited hardware presents challenges due to GPU memory constraints. Various distributed fine-tuning methods have been proposed to alleviate memory constraints on the GPU. However, determining the most effective method for achieving rapid fine-tuning while preventing GPU out-of-memory issues in a given environment remains unclear. To address this challenge, we introduce LLMem, a solution that estimates the GPU memory consumption when applying distributed fine-tuning methods across multiple GPUs and identifies the optimal method. We conduct GPU memory usage estimation prior to fine-tuning, leveraging the fundamental structure of transformer-based decoder models and the memory usage distribution of each method. Experimental results show that LLMem accurately estimates peak GPU memory usage on a single GPU, with error rates of up to 1.6%. Additionally, it shows an average error rate of 3.0% when applying distributed fine-tuning methods to LLMs with more than a billion parameters on multi-GPU setups.
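To give a rough sense of the kind of accounting such an estimator performs, the sketch below sums the usual fine-tuning memory components (weights, gradients, Adam optimizer states, and activations) for a decoder-style model. This is a back-of-envelope illustration under common rule-of-thumb assumptions, not LLMem's actual estimator, which models the transformer decoder structure and per-method memory distribution in much finer detail; all function names and constants here are our own.

```python
# Illustrative sketch only: a coarse GPU memory estimate for full fine-tuning
# with Adam under mixed precision. Constants are common rules of thumb, not
# values taken from the LLMem paper.

def estimate_finetune_memory_gb(
    num_params: float,          # total trainable parameters, e.g. 1.3e9
    batch_size: int,            # micro-batch size per GPU
    seq_len: int,               # sequence length in tokens
    hidden_size: int,           # model hidden dimension
    num_layers: int,            # number of decoder layers
    bytes_per_param: int = 2,   # fp16/bf16 weights and gradients
) -> float:
    weights = num_params * bytes_per_param
    gradients = num_params * bytes_per_param
    # Adam typically keeps fp32 master weights plus two fp32 moment tensors.
    optimizer_states = num_params * 4 * 3
    # Very rough activation estimate: a handful of hidden-sized tensors per layer.
    activations = batch_size * seq_len * hidden_size * num_layers * 16
    total_bytes = weights + gradients + optimizer_states + activations
    return total_bytes / 1024 ** 3

# Example: a ~1.3B-parameter decoder, batch size 4, sequence length 1024.
print(f"{estimate_finetune_memory_gb(1.3e9, 4, 1024, 2048, 24):.1f} GB")
```

Even this crude sum makes clear why optimizer states and activations, not just the weights, dominate fine-tuning memory, and why distributing or sharding them across GPUs changes which method fits a given hardware budget.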

Similar Work