
A Comprehensive Performance Study Of Large Language Models On Novel AI Accelerators

Emani Murali, Foreman Sam, Sastry Varuni, Xie Zhen, Raskar Siddhisanket, Arnold William, Thakur Rajeev, Vishwanath Venkatram, Papka Michael E. Arxiv 2023

[Paper]    
Applications GPT Model Architecture Pretraining Methods Reinforcement Learning Transformer

Artificial intelligence (AI) methods have become critical in scientific applications, helping to accelerate scientific discovery. Large language models (LLMs) are considered a promising approach to some of these challenging problems because of their superior generalization capabilities across domains. The effectiveness of the models and the accuracy of the resulting applications are contingent upon their efficient execution on the underlying hardware infrastructure. Specialized AI accelerator hardware systems have recently become available for accelerating AI applications. However, the comparative performance of these AI accelerators on large language models has not been previously studied. In this paper, we systematically study LLMs on multiple AI accelerators and GPUs and evaluate their performance characteristics for these models. We evaluate these systems with (i) a micro-benchmark using a core transformer block, (ii) a GPT-2 model, and (iii) an LLM-driven science use case, GenSLM. We present our findings and analyses of the models’ performance to better understand the intrinsic capabilities of AI accelerators. Furthermore, our analysis takes into account key factors such as sequence lengths, scaling behavior, sparsity, and sensitivity to gradient accumulation steps.
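
As an illustration of the transformer-block micro-benchmark described in the abstract, the sketch below times forward and backward passes of a single transformer block across several sequence lengths. It is a minimal sketch only: the use of PyTorch's `nn.TransformerEncoderLayer`, the GPT-2-like hidden size, the batch size, and the chosen sequence lengths are assumptions for illustration, not the paper's actual benchmark harness or configuration.

```python
import time
import torch
import torch.nn as nn

# Hypothetical configuration (assumptions, not the paper's settings).
HIDDEN_DIM = 768       # GPT-2-like hidden size
NUM_HEADS = 12
BATCH_SIZE = 8
SEQ_LENGTHS = [128, 256, 512, 1024]

device = "cuda" if torch.cuda.is_available() else "cpu"

# A single transformer encoder layer stands in for the "core transformer block".
block = nn.TransformerEncoderLayer(
    d_model=HIDDEN_DIM, nhead=NUM_HEADS, batch_first=True
).to(device)
optimizer = torch.optim.AdamW(block.parameters())

for seq_len in SEQ_LENGTHS:
    x = torch.randn(BATCH_SIZE, seq_len, HIDDEN_DIM, device=device)

    # Warm-up iterations to exclude one-time allocation/compilation costs.
    for _ in range(3):
        block(x).sum().backward()
        optimizer.step()
        optimizer.zero_grad()

    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(10):
        block(x).sum().backward()
        optimizer.step()
        optimizer.zero_grad()
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = (time.perf_counter() - start) / 10

    print(f"seq_len={seq_len:5d}  mean step time: {elapsed * 1e3:.2f} ms")
```

Sweeping the sequence length in this way mirrors one of the factors the study analyzes; comparable loops varying batch size or gradient accumulation steps would probe the other sensitivities mentioned above.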

Similar Work