[Paper]
LLMs have seen rapid adoption in all domains. They need to be trained on
high-end high-performance computing (HPC) infrastructures and ingest massive
amounts of input data. Unsurprisingly, at such a large scale, unexpected events
(e.g., component failures, software instability, undesirable learning patterns)
are frequent and typically impact the training negatively. Thus, LLMs need to
be checkpointed frequently so that they
can be rolled back to a stable state and subsequently fine-tuned. However,
given the large sizes of LLMs, a straightforward checkpointing solution that
directly writes the model parameters and optimizer state to persistent storage
(e.g., a parallel file system) incurs significant I/O overheads. To address
this challenge, in this paper we study how to reduce I/O overheads to enable
fast and scalable checkpointing for LLMs that can be applied at high frequency
(up to the granularity of individual iterations) without significant impact on
the training process. Specifically, we introduce a lazy asynchronous
multi-level approach that takes advantage of the fact that the tensors making
up the model and optimizer state shards remain immutable for extended periods
of time, which makes it possible to copy their content in the background while
minimally interfering with the training process. We evaluate our approach at
scales of up to 180 GPUs using different model sizes, parallelism settings, and
checkpointing frequencies. The results show up to 48× faster checkpointing
compared with state-of-the-art checkpointing approaches.
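
To make the core idea concrete, below is a minimal sketch of asynchronous multi-level checkpointing in a PyTorch setting. It is illustrative only: `AsyncCheckpointer`, `snapshot`, and the two-level flow are hypothetical names and structure, not the paper's implementation. Each immutable tensor shard is copied device-to-host on a side CUDA stream, and a background thread persists the host copies to storage, so the training loop blocks only while the copies are enqueued.

```python
# Hypothetical sketch (not the paper's code) of lazy asynchronous
# multi-level checkpointing. Level 1: device-to-host copies of immutable
# tensor shards on a side CUDA stream. Level 2: a background thread
# flushes the host copies to persistent storage.
import threading

import torch


class AsyncCheckpointer:
    def __init__(self):
        # Side stream so device-to-host copies overlap with training kernels.
        self.copy_stream = torch.cuda.Stream()

    def snapshot(self, shards, path):
        """Asynchronously checkpoint a dict of immutable GPU tensors."""
        host_copies = {}
        with torch.cuda.stream(self.copy_stream):
            for name, tensor in shards.items():
                # Pinned host memory lets the copy proceed via DMA without
                # blocking the GPU or the Python thread.
                buf = torch.empty(tensor.shape, dtype=tensor.dtype,
                                  pin_memory=True)
                buf.copy_(tensor, non_blocking=True)
                host_copies[name] = buf

        def flush():
            # Wait for the device-to-host copies, then persist to storage
            # (e.g., a parallel file system) off the critical path.
            self.copy_stream.synchronize()
            torch.save(host_copies, path)

        worker = threading.Thread(target=flush, daemon=True)
        worker.start()
        # The caller must ensure the device-to-host copies have finished
        # (e.g., via copy_stream.synchronize()) before the shards are
        # mutated again; the file write can continue in the background.
        return worker
```

In this sketch, a training loop would call `snapshot()` right after an optimizer step and synchronize the copy stream only before the next update, which bounds the blocking time to the copy enqueue while the flush to storage proceeds in the background.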