
A Note on LoRA

Vlad Fomenko, Han Yu, Jongho Lee, Stanley Hsieh, Weizhu Chen. arXiv 2024

[Paper]    
Fine Tuning

LoRA (Low-Rank Adaptation) has emerged as a preferred method for efficiently adapting Large Language Models (LLMs) with remarkable simplicity and efficacy. This note extends the original LoRA paper by offering new perspectives that were not initially discussed and presents a series of insights for deploying LoRA at scale. Without introducing new experiments, we aim to improve the understanding and application of LoRA.
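
For readers unfamiliar with the mechanism the note revisits, below is a minimal sketch of a LoRA-adapted linear layer in PyTorch. The class name `LoRALinear` and the default hyperparameters (rank `r`, scaling `alpha`) are illustrative choices for this sketch, not values taken from the paper; the structure follows the standard LoRA formulation, where the pretrained weight is frozen and a trainable low-rank update `B A` (with `B` initialized to zero) is added to the forward pass.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update.

    The adapted forward pass computes W x + (alpha / r) * B A x, where W
    is the frozen pretrained weight and A, B are the low-rank factors.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A projects the input down to rank r; B projects back up.
        # B starts at zero, so the adapter initially leaves the base
        # model's behavior unchanged.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Usage sketch: wrap an existing projection so only the low-rank
# factors receive gradients during fine-tuning.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
out = layer(torch.randn(4, 768))
```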

Similar Work