
BP-Transformer: Modelling Long-range Context via Binary Partitioning

Ye Zihao, Guo Qipeng, Gan Quan, Qiu Xipeng, Zhang Zheng. arXiv 2019

[Paper]    
Applications Attention Mechanism Language Modeling Model Architecture Pretraining Methods Transformer

The Transformer model is widely successful on many natural language processing tasks. However, the quadratic complexity of self-attention limits its application to long text. In this paper, we propose BP-Transformer (BPT for short), which adopts a fine-to-coarse attention mechanism over multi-scale spans obtained via binary partitioning (BP). BPT yields \(O(k \cdot n \log(n/k))\) connections, where \(k\) is a hyperparameter controlling the density of attention. BPT strikes a good balance between computational complexity and model capacity. A series of experiments on text classification, machine translation and language modeling shows that BPT outperforms previous self-attention models on long text. Our code, hyperparameters and CUDA kernels for sparse attention are available in PyTorch.
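To make the fine-to-coarse span construction and the \(O(k \cdot n \log(n/k))\) edge count concrete, here is a minimal Python sketch. It is not the authors' released code; the function name `bp_spans` and the exact span-selection rule (attend to up to \(k\) spans per side at each scale, with span length doubling per level) are illustrative assumptions based on the abstract's description of binary partitioning.

```python
def bp_spans(n, i, k):
    """Illustrative sketch (not the paper's exact graph construction):
    list the multi-scale spans that token i attends to, fine-to-coarse,
    in a sequence of length n.

    At level 0 the token attends to its k nearest neighbours on each side;
    at level l it attends to up to k spans of length 2**l just beyond the
    region already covered. Span lengths double each level, so roughly
    log2(n/k) levels suffice and each token keeps O(k * log(n/k)) edges,
    giving O(k * n * log(n/k)) connections in total.
    """
    spans = []
    level = 0
    left, right = i, i          # positions [left, right] are already covered
    while left > 0 or right < n - 1:
        size = 2 ** level       # span length at this level
        # up to k spans of this size to the left of the covered region
        for _ in range(k):
            if left <= 0:
                break
            start = max(0, left - size)
            spans.append((level, start, left - 1))
            left = start
        # up to k spans of this size to the right of the covered region
        for _ in range(k):
            if right >= n - 1:
                break
            end = min(n - 1, right + size)
            spans.append((level, right + 1, end))
            right = end
        level += 1
    return spans


# Example: token 8 in a length-16 sequence with density k = 2 attends to
# roughly 2*k spans per level, from single tokens nearby to long spans far away.
print(bp_spans(16, 8, 2))
```

Running the example shows about \(2k\) spans per level and \(O(\log(n/k))\) levels per token, which is where the overall \(O(k \cdot n \log(n/k))\) connection count in the abstract comes from.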

Similar Work