
Real-time Execution Of Large-scale Language Models On Mobile

Niu Wei, Kong Zhenglun, Yuan Geng, Jiang Weiwen, Guan Jiexiong, Ding Caiwen, Zhao Pu, Liu Sijia, Ren Bin, Wang Yanzhi. arXiv 2020

[Paper]    
BERT, Efficiency and Optimization, Model Architecture, Pretraining Methods, RAG, Tools, Transformer

Pre-trained large-scale language models have increasingly demonstrated high accuracy on many natural language processing (NLP) tasks. However, the limited weight storage and computational speed on hardware platforms have impeded the popularity of pre-trained models, especially in the era of edge computing. In this paper, we seek to find the best model structure of BERT for a given computation size to match specific devices. We propose the first compiler-aware neural architecture optimization framework. Our framework can guarantee the identified model to meet both resource and real-time specifications of mobile devices, thus achieving real-time execution of large transformer-based models like BERT variants. We evaluate our model on several NLP tasks, achieving competitive results on well-known benchmarks with lower latency on mobile devices. Specifically, our model is 5.2x faster on CPU and 4.1x faster on GPU with 0.5-2% accuracy loss compared with BERT-base. Our overall framework achieves up to 7.8x speedup compared with TensorFlow-Lite with only minor accuracy loss.
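The abstract describes a compiler-aware search for a BERT structure that fits a mobile latency budget. The paper does not spell out the search procedure here, so the following is only a minimal illustrative sketch of latency-constrained model selection; the candidate space, latency predictor, and capacity score are all hypothetical stand-ins, not the authors' actual framework.

```python
# Illustrative sketch only: choose a BERT-like configuration under a mobile
# latency budget. The search space, latency model, and scoring below are
# hypothetical; a compiler-aware framework would instead profile optimized
# kernels on the target device.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class BertConfig:
    num_layers: int
    hidden_size: int
    num_heads: int

def estimated_latency_ms(cfg: BertConfig) -> float:
    """Hypothetical per-layer latency estimate (not a real device measurement)."""
    per_layer = 0.002 * cfg.hidden_size + 0.1 * cfg.num_heads
    return cfg.num_layers * per_layer

def capacity_score(cfg: BertConfig) -> float:
    """Crude proxy for model capacity, standing in for predicted accuracy."""
    return cfg.num_layers * cfg.hidden_size

def search(latency_budget_ms: float) -> BertConfig | None:
    """Return the highest-capacity candidate whose estimated latency fits the budget."""
    candidates = [
        BertConfig(layers, hidden, heads)
        for layers, hidden, heads in product((4, 6, 8, 12), (384, 512, 768), (6, 8, 12))
    ]
    feasible = [c for c in candidates if estimated_latency_ms(c) <= latency_budget_ms]
    return max(feasible, key=capacity_score, default=None)

if __name__ == "__main__":
    print("Selected configuration:", search(latency_budget_ms=15.0))
```

In this toy version the budget simply prunes configurations before picking the largest remaining one; the paper's contribution is that the feasibility check reflects compiler-optimized execution on the actual mobile CPU/GPU rather than an analytical estimate like the one above.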

Similar Work