Revolutionizing Mobile Interaction: Enabling A 3 Billion Parameter GPT LLM On Mobile

Samuel Carreira, Tomás Marques, José Ribeiro, Carlos Grilo. arXiv 2023

[Paper]    
Efficiency And Optimization, GPT, Model Architecture, Pretraining Methods, Quantization, Training Techniques, Transformer

The field of Artificial Intelligence has witnessed remarkable progress in recent years, especially with the emergence of powerful large language models (LLMs) based on the transformer architecture. Cloud-based LLMs, such as OpenAI's ChatGPT, offer impressive capabilities but raise latency and privacy concerns because of their dependence on a network connection. This article presents an approach to LLM inference in which models with billions of parameters run directly on mobile devices without network connectivity. The authors showcase a fine-tuned GPT LLM with 3 billion parameters that operates smoothly on devices with as little as 4 GB of memory. Through the integration of native code and model quantization techniques, the application not only serves as a general-purpose assistant but also enables seamless mobile interaction through text-to-actions features. The article provides insights into the training pipeline, implementation details, test results, and future directions of on-device LLM inference. This technology opens up the possibility of giving users sophisticated AI capabilities while preserving their privacy and eliminating latency concerns.
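To see why quantization is the enabling factor for fitting a 3-billion-parameter model into 4 GB of memory, consider the rough weight-storage arithmetic. The sketch below is illustrative only; the abstract does not specify the bit-widths or quantization scheme the authors actually used:

```python
# Back-of-the-envelope memory footprint of a 3B-parameter model's weights
# at different precisions. Illustrative only: the paper's actual
# quantization scheme is not detailed in the abstract.

PARAMS = 3_000_000_000  # 3 billion parameters


def weights_gib(bits_per_param: int) -> float:
    """Approximate weight storage in GiB at the given precision."""
    return PARAMS * bits_per_param / 8 / 1024**3


for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: ~{weights_gib(bits):.1f} GiB")

# fp32: ~11.2 GiB  -- far beyond a 4 GiB phone
# fp16: ~5.6 GiB   -- still does not fit
# int8: ~2.8 GiB   -- fits, with little headroom
# int4: ~1.4 GiB   -- leaves room for activations and the OS
```

Only at 8-bit precision or below do the weights plausibly fit alongside activations, the KV cache, and the operating system on a 4 GB device, which is consistent with the abstract's emphasis on quantization as a prerequisite for on-device inference.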
