Towards Smaller, Faster Decoder-only Transformers: Architectural Variants And Their Implications

Suresh Sathya Krishnan, P Shunmugapriya. arXiv 2024

Tags: Fine Tuning, GPT, Model Architecture, Pretraining Methods, Training Techniques, Transformer

In recent times, research on Large Language Models (LLMs) has grown exponentially, predominantly focusing on models underpinned by the transformer architecture, as established by [1], and further developed through the decoder-only variations by [2]. Contemporary efforts in this field primarily aim to enhance model capabilities by scaling up both the architecture and the data volumes used during training. However, the exploration of reducing model sizes while preserving their efficacy remains scant. In this study, we introduce three modifications to the decoder-only transformer architecture, namely ParallelGPT (pgpt), LinearGPT (lgpt), and ConvGPT (cgpt). These variants achieve performance comparable to the conventional architecture in language generation, while benefiting from reduced model sizes and faster training. We open-source the model weights and the complete codebase for these implementations to support further research.
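The abstract names the three variants but does not spell out their designs; as a rough illustration only, the sketch below shows one plausible reading of a "parallel" decoder block, in which the attention and feed-forward branches share a single pre-norm and are summed into one residual update. The class name, dimensions, and the parallel-branch interpretation are assumptions for illustration, not the paper's definitions; the actual pgpt, lgpt, and cgpt implementations are in the authors' open-sourced codebase.

```python
# Hypothetical sketch of a parallel-branch decoder block (not the paper's exact design):
# attention and MLP run as parallel branches over one normalized input,
# rather than sequentially, which is one common way to reduce depth and latency.
import torch
import torch.nn as nn

class ParallelDecoderBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4, d_ff: int = 1024):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Single shared pre-norm; a causal mask keeps the block decoder-only.
        h = self.norm(x)
        seq_len = x.size(1)
        causal = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device),
            diagonal=1,
        )
        attn_out, _ = self.attn(h, h, h, attn_mask=causal, need_weights=False)
        # Attention and MLP outputs are summed into a single residual update.
        return x + attn_out + self.mlp(h)

if __name__ == "__main__":
    block = ParallelDecoderBlock()
    tokens = torch.randn(2, 16, 256)   # (batch, sequence, d_model)
    print(block(tokens).shape)         # torch.Size([2, 16, 256])
```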
