
RDRec: Rationale Distillation for LLM-based Recommendation

Xinfeng Wang, Jin Cui, Yoshimi Suzuki, Fumiyo Fukumoto. arXiv 2024

Tags: Attention Mechanism, Distillation, Efficiency and Optimization, Has Code, Model Architecture, Prompting, RAG, Reinforcement Learning

Large language model (LLM)-based recommender models, which bridge users and items through textual prompts for effective semantic reasoning, have gained considerable attention. However, few methods consider the underlying rationales behind interactions, such as user preferences and item attributes, which limits the reasoning capability of LLMs for recommendation. This paper proposes the rationale distillation recommender (RDRec), a compact model designed to learn rationales generated by a larger language model (LM). By leveraging rationales distilled from reviews related to users and items, RDRec builds sharper user and item profiles for recommendation. Experiments show that RDRec achieves state-of-the-art (SOTA) performance in both top-N and sequential recommendation. Our source code is released at https://github.com/WangXFng/RDRec.
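To make the abstract's core idea concrete, below is a minimal sketch of rationale distillation: a larger LM turns a user review into a rationale (a statement of user preference and item attributes), and a compact seq2seq model is fine-tuned to produce that rationale from a recommendation prompt. The prompt template, model choice (`t5-small`), and the `teacher_rationale` helper are illustrative assumptions, not the paper's released implementation; see the GitHub repository above for the actual code.

```python
# Sketch of rationale distillation for recommendation (assumed setup,
# not RDRec's official pipeline). Requires: transformers, torch.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
student = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # compact model

def teacher_rationale(review: str) -> str:
    """Step 1 (offline teacher): a larger LM distills a rationale from a
    review. Stubbed here; in practice this would query a large LLM with a
    prompt like 'Explain why the user bought this item, given the review.'"""
    return "prefers lightweight gear; the tent is easy to pitch"  # hypothetical output

# Step 2 (student): fine-tune the compact model to generate the teacher's
# rationale from a user-item prompt, so it learns profile-like rationales.
review = "This tent went up in five minutes and weighs almost nothing."
prompt = f"explain interaction: user_42 reviewed item_17: {review}"  # assumed template
target = teacher_rationale(review)

inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids

loss = student(**inputs, labels=labels).loss  # standard seq2seq training loss
loss.backward()  # an optimizer step would follow in a real training loop
```

In a full system, the distilled rationales would then augment the prompts used for top-N and sequential recommendation, rather than being an end in themselves.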

Similar Work