
A Transformer With Stack Attention

Jiaoda Li, Jennifer C. White, Mrinmaya Sachan, Ryan Cotterell. arXiv 2024

[Paper]    
Attention Mechanism, Interpretability And Explainability, Model Architecture, Pretraining Methods, Transformer

Natural languages are believed to be (mildly) context-sensitive. Despite underpinning remarkably capable large language models, transformers are unable to model many context-free language tasks. In an attempt to address this limitation in the modeling power of transformer-based language models, we propose augmenting them with a differentiable, stack-based attention mechanism. Our stack-based attention mechanism can be incorporated into any transformer-based language model and adds a level of interpretability to the model. We show that the addition of our stack-based attention mechanism enables the transformer to model some, but not all, deterministic context-free languages.
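The abstract does not spell out how the stack is made differentiable. As a point of reference only, the sketch below shows one common way such a "soft" stack can be built: a superposition stack in the style of Joulin and Mikolov (2015), where push, pop, and no-op are mixed by softmax probabilities so gradients flow through every action. All names (`DifferentiableStack`, `action_proj`, `value_proj`, the fixed stack depth) are hypothetical and this is not the authors' exact stack-attention formulation, which is defined in the paper itself.

```python
# A minimal, hypothetical sketch of a differentiable ("superposition") stack.
# NOT the paper's stack attention; shown only to illustrate the general idea
# of a stack whose updates remain differentiable.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DifferentiableStack(nn.Module):
    """Soft stack of vectors; push/pop/no-op are blended by probabilities."""

    def __init__(self, d_model: int, depth: int = 16):
        super().__init__()
        self.depth = depth
        # Hypothetical projections: 3 action logits (push, pop, no-op)
        # and a candidate vector to push, both read off the hidden state.
        self.action_proj = nn.Linear(d_model, 3)
        self.value_proj = nn.Linear(d_model, d_model)

    def forward(self, h: torch.Tensor, stack: torch.Tensor) -> torch.Tensor:
        # h:     (batch, d_model)         current hidden state
        # stack: (batch, depth, d_model)  soft stack, index 0 = top
        p = F.softmax(self.action_proj(h), dim=-1)          # (batch, 3)
        push, pop, noop = p[:, 0:1, None], p[:, 1:2, None], p[:, 2:3, None]
        v = self.value_proj(h).unsqueeze(1)                  # (batch, 1, d_model)

        # Push: shift every slot down by one and place v on top.
        pushed = torch.cat([v, stack[:, :-1]], dim=1)
        # Pop: shift every slot up by one; the bottom slot becomes zero.
        popped = torch.cat([stack[:, 1:], torch.zeros_like(stack[:, :1])], dim=1)

        # Convex combination of the three possible next stacks.
        return push * pushed + pop * popped + noop * stack


if __name__ == "__main__":
    d, depth, batch = 32, 8, 2
    stack_mod = DifferentiableStack(d, depth)
    stack = torch.zeros(batch, depth, d)
    h = torch.randn(batch, d)
    stack = stack_mod(h, stack)  # one soft stack update
    print(stack.shape)           # torch.Size([2, 8, 32])
```

In a transformer layer, the top of such a stack (or a readout over its slots) could serve as an additional context vector alongside standard attention; how the paper actually couples the stack to attention is described in the paper linked above.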
