
Focus On The Core: Efficient Attention Via Pruned Token Compression For Document Classification

Yun Jungmin, Kim Mihyeon, Kim Youngbin. arXiv 2024

[Paper]    
Attention Mechanism BERT Efficiency And Optimization Model Architecture Pretraining Methods Pruning Transformer

Transformer-based models have achieved dominant performance in numerous NLP tasks. Despite their remarkable successes, pre-trained transformers such as BERT suffer from a computationally expensive self-attention mechanism that interacts with all tokens, including those that are unfavorable to classification performance. To overcome these challenges, we propose integrating two strategies: token pruning and token combining. Token pruning eliminates less important tokens from the attention mechanism’s keys and values as the input passes through the layers. Additionally, we adopt fuzzy logic to handle uncertainty and alleviate potential mispruning risks arising from an imbalanced distribution of each token’s importance. Token combining, on the other hand, condenses input sequences into smaller sizes in order to further compress the model. By integrating these two approaches, we not only improve the model’s performance but also reduce its computational demands. Experiments with various datasets demonstrate superior performance compared to baseline models, with the largest gain over the existing BERT model being +5%p in accuracy and +5.6%p in F1 score. Additionally, memory cost is reduced to 0.61x, and a speedup of 1.64x is achieved.
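For intuition, below is a minimal PyTorch sketch of the general idea of pruning low-importance tokens from the attention keys and values. The function name `pruned_attention`, the importance heuristic (mean attention each key token receives), and the `keep_ratio` knob are assumptions made for this illustration; the paper's actual method additionally uses fuzzy logic to score token importance and combines the pruning with a token-combining module, which is not shown here.

```python
import torch

def pruned_attention(q, k, v, keep_ratio=0.5):
    """Illustrative single-head attention that drops low-importance
    key/value tokens before computing the attention output.

    q, k, v: (batch, seq_len, dim) tensors.
    keep_ratio: fraction of key/value tokens to retain (hypothetical knob).
    """
    scale = k.size(-1) ** 0.5
    scores = q @ k.transpose(-2, -1) / scale            # (B, Lq, Lk)
    # Importance of each key token: mean attention it receives from all queries.
    importance = scores.softmax(dim=-1).mean(dim=1)     # (B, Lk)
    n_keep = max(1, int(k.size(1) * keep_ratio))
    keep_idx = importance.topk(n_keep, dim=-1).indices  # (B, n_keep)
    # Gather only the retained key/value tokens.
    idx = keep_idx.unsqueeze(-1).expand(-1, -1, k.size(-1))
    k_kept, v_kept = k.gather(1, idx), v.gather(1, idx)
    attn = (q @ k_kept.transpose(-2, -1) / scale).softmax(dim=-1)
    return attn @ v_kept                                 # (B, Lq, dim)

# Example: retain half of the 128 key/value tokens.
x = torch.randn(2, 128, 64)
out = pruned_attention(x, x, x, keep_ratio=0.5)
print(out.shape)  # torch.Size([2, 128, 64])
```

Because the key/value sequence shrinks at each pruned layer, the quadratic attention cost drops accordingly, which is the source of the memory and speed savings reported above.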

Similar Work