
SWIFT: A Scalable Lightweight Infrastructure For Fine-tuning

Zhao Yuze, Huang Jintao, Hu Jinghan, Wang Xingjun, Mao Yunlin, Zhang Daoze, Jiang Zeyinzi, Wu Zhikai, Ai Baole, Wang Ang, Zhou Wenmeng, Chen Yingda. arXiv 2024

[Paper]    
Agentic Applications Attention Mechanism Efficiency And Optimization Fine Tuning Model Architecture Pretraining Methods Quantization RAG Reinforcement Learning Tools Training Techniques Transformer

Recent developments in Large Language Models (LLMs) and Multi-modal Large Language Models (MLLMs) leverage attention-based Transformer architectures and have achieved superior performance and generalization capabilities. They have since covered extensive areas of traditional learning tasks. For instance, text-based tasks such as text classification and sequence labeling, as well as multi-modal tasks like Visual Question Answering (VQA) and Optical Character Recognition (OCR), which were previously addressed with separate models, can now be tackled with a single foundation model. Consequently, the training and lightweight fine-tuning of LLMs and MLLMs, especially those based on the Transformer architecture, have become particularly important. In recognition of these overwhelming needs, we develop SWIFT, a customizable one-stop infrastructure for large models. With support for over \(300\) LLMs and \(50\) MLLMs, SWIFT is the open-source framework that provides the most comprehensive support for fine-tuning large models. In particular, it is the first training framework to provide systematic support for MLLMs. Beyond the core fine-tuning functionality, SWIFT also integrates post-training processes such as inference, evaluation, and model quantization to facilitate fast adoption of large models in various application scenarios. With a systematic integration of various training techniques, SWIFT offers helpful utilities such as benchmark comparisons among different training techniques for large models. For fine-tuning models specialized in agent frameworks, we show that notable improvements on the ToolBench leaderboard can be achieved by training with customized datasets on SWIFT: an increase of 5.2%-21.8% in the Act.EM metric over various baseline models, a reduction in hallucination by 1.6%-14.1%, and an average performance improvement of 8%-17%.
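To make "lightweight fine-tuning" concrete, the sketch below shows a minimal LoRA fine-tune of a causal language model using Hugging Face Transformers and PEFT. This is not SWIFT's own API; it is a generic parameter-efficient recipe of the kind that frameworks like SWIFT wrap behind a unified interface. The base model (`gpt2`), the dataset (`wikitext-2`), and all hyperparameters are illustrative placeholders.

```python
# Illustrative sketch (not SWIFT's API): LoRA fine-tuning with Transformers + PEFT.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder base model; swap in any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA: train small low-rank adapters instead of the full weight matrices.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["c_attn"],  # attention projection in GPT-2
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# Small text dataset, tokenized for causal-LM training.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)
tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
tokenized = tokenized.filter(lambda x: len(x["input_ids"]) > 0)  # drop empty lines

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           logging_steps=50),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out/adapter")  # only the adapter weights are saved
```

SWIFT's contribution, as described in the abstract, is to standardize and extend this kind of workflow across hundreds of LLMs and MLLMs and to bundle it with inference, evaluation, and quantization steps.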

Similar Work