
TRAWL: Tensor Reduced And Approximated Weights For Large Language Models

Luo Yiran, Patel Het, Fu Yu, Ahn Dawon, Chen Jia, Dong Yue, Papalexakis Evangelos E. arXiv 2024

Tags: Efficiency And Optimization, Fine Tuning, Model Architecture, Pretraining Methods, RAG, Reinforcement Learning, Training Techniques, Transformer

Large language models (LLMs) have fundamentally transformed artificial intelligence, catalyzing recent advancements while imposing substantial environmental and computational burdens. We introduce TRAWL (Tensor Reduced and Approximated Weights for Large Language Models), a novel methodology for optimizing LLMs through tensor decomposition. TRAWL applies diverse decomposition strategies to the weight matrices of transformer-based architectures, realizing notable performance enhancements without necessitating retraining. The most significant improvements were observed through a layer-by-layer intervention strategy, particularly when applied to the fully connected weights of the final layers, yielding up to a 16% improvement in accuracy without additional data or fine-tuning. These results underscore the importance of targeted and adaptive techniques in increasing the efficiency and effectiveness of large language model optimization, thereby promoting the development of more sustainable and accessible AI systems.
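
The sketch below is not the authors' code; it only illustrates the kind of intervention the abstract describes: replacing the fully connected weight matrices of the final transformer blocks with low-rank approximations, in place and without any retraining. The SVD-based factorization, the rank value, and the GPT-2-style module names are illustrative assumptions; TRAWL itself explores several tensor-decomposition strategies.

```python
import torch


def low_rank_approx(weight: torch.Tensor, rank: int) -> torch.Tensor:
    """Return the best rank-`rank` approximation of a 2-D weight matrix (via SVD)."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]


@torch.no_grad()
def compress_final_fc_layers(model, num_layers: int = 2, rank: int = 64):
    """Overwrite the feed-forward weights of the last `num_layers` blocks
    with low-rank approximations, using no additional data or fine-tuning."""
    blocks = model.transformer.h  # GPT-2-style module layout; adjust for other models
    for block in blocks[-num_layers:]:
        for name in ("c_fc", "c_proj"):  # the MLP (fully connected) weight matrices
            linear = getattr(block.mlp, name)
            linear.weight.copy_(low_rank_approx(linear.weight, rank))
    return model
```

In this sketch, the layer-by-layer aspect corresponds to restricting the intervention to the final blocks and evaluating accuracy after each change; the choice of which matrices to decompose and at what rank is the knob the paper's different strategies vary.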

Similar Work