
Learning Positional Attention For Sequential Recommendation

Luo Fan, Zhang Juan, Xu Shenghui. arXiv 2024


Self-attention-based networks have achieved remarkable performance in sequential recommendation tasks. A crucial component of these models is positional encoding. In this study, we delve into the learned positional embedding, demonstrating that it often captures the distance between tokens. Building on this insight, we introduce novel attention models that directly learn positional relations. Extensive experiments reveal that our proposed models, PARec and FPARec, outperform previous self-attention-based approaches. Our code is available at the link for anonymous review: https://anonymous.4open.science/r/FPARec-2C55/
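The abstract does not spell out how the positional relations are parameterized, so the following is only a minimal sketch of the general idea, not the authors' PARec or FPARec implementation: a causal self-attention layer in which a learnable bias over (query position, key position) pairs is added directly to the attention logits, instead of injecting positional embeddings into the token inputs. All class names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): self-attention with a
# directly learned position-to-position bias for sequential recommendation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnedPositionalAttention(nn.Module):
    def __init__(self, hidden_dim: int, max_len: int):
        super().__init__()
        self.q_proj = nn.Linear(hidden_dim, hidden_dim)
        self.k_proj = nn.Linear(hidden_dim, hidden_dim)
        self.v_proj = nn.Linear(hidden_dim, hidden_dim)
        # Learnable bias over (query position, key position) pairs,
        # added directly to the attention logits.
        self.pos_bias = nn.Parameter(torch.zeros(max_len, max_len))
        self.scale = hidden_dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim) item-embedding sequence
        seq_len = x.size(1)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = torch.matmul(q, k.transpose(-2, -1)) * self.scale
        scores = scores + self.pos_bias[:seq_len, :seq_len]
        # Causal mask: each position attends only to itself and earlier items.
        mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device),
            diagonal=1,
        )
        scores = scores.masked_fill(mask, float("-inf"))
        attn = F.softmax(scores, dim=-1)
        return torch.matmul(attn, v)


# Usage with illustrative sizes: batch of 2 sequences, 50 items, 64-dim embeddings.
layer = LearnedPositionalAttention(hidden_dim=64, max_len=50)
out = layer(torch.randn(2, 50, 64))  # -> (2, 50, 64)
```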
