
RA-Rec: An Efficient ID Representation Alignment Framework for LLM-based Recommendation

Yu Xiaohan, Zhang Li, Zhao Xin, Wang Yue, Ma Zhongrui. arXiv 2024

[Paper]    
Model Architecture, Prompting, Tools, Training Techniques

Large language models (LLMs) have recently emerged as a powerful tool for a variety of natural language processing tasks, spurring a wave of work combining LLMs with recommendation systems, termed LLM-based RS. Current approaches generally fall into two main paradigms, the ID direct usage paradigm and the ID translation paradigm, whose core weaknesses stem from a lack of recommendation knowledge and of uniqueness, respectively. To address this limitation, we propose a new paradigm, ID representation, which incorporates pre-trained ID embeddings into LLMs in a complementary manner. In this work, we present RA-Rec, an efficient ID representation alignment framework for LLM-based recommendation, which is compatible with multiple ID-based methods and LLM architectures. Specifically, we treat ID embeddings as soft prompts and design an innovative alignment module and an efficient tuning method with tailored data construction for alignment. Extensive experiments demonstrate that RA-Rec substantially outperforms current state-of-the-art methods, achieving up to 3.0% absolute HitRate@100 improvements while using an order of magnitude less training data.
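The core idea of the ID representation paradigm, feeding pre-trained ID embeddings to the LLM as soft prompts alongside the embedded text prompt, can be illustrated with a minimal sketch. The module below is a hypothetical stand-in for the paper's alignment module; the `IDSoftPromptAligner` name, the MLP structure, and all dimensions are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class IDSoftPromptAligner(nn.Module):
    """Illustrative sketch: project pre-trained ID embeddings from a
    recommender into the LLM embedding space and prepend them as soft prompts."""

    def __init__(self, id_dim: int = 64, llm_dim: int = 4096, num_prompt_tokens: int = 4):
        super().__init__()
        self.num_prompt_tokens = num_prompt_tokens
        # Hypothetical alignment module: a small MLP mapping each ID embedding
        # to `num_prompt_tokens` vectors of the LLM hidden size.
        self.align = nn.Sequential(
            nn.Linear(id_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim * num_prompt_tokens),
        )

    def forward(self, id_emb: torch.Tensor, token_emb: torch.Tensor) -> torch.Tensor:
        # id_emb:    (batch, id_dim)        frozen embeddings from an ID-based recommender
        # token_emb: (batch, seq, llm_dim)  text prompt passed through the LLM's embedding layer
        soft_prompts = self.align(id_emb).view(-1, self.num_prompt_tokens, token_emb.size(-1))
        # Prepend the aligned ID representations as soft prompt tokens.
        return torch.cat([soft_prompts, token_emb], dim=1)

# Usage: fuse a user's pre-trained ID embedding with an embedded text prompt.
aligner = IDSoftPromptAligner(id_dim=64, llm_dim=4096, num_prompt_tokens=4)
user_id_emb = torch.randn(2, 64)        # e.g. from a frozen ID-based encoder
prompt_emb = torch.randn(2, 32, 4096)   # e.g. output of the LLM's input embedding layer
inputs_embeds = aligner(user_id_emb, prompt_emb)
print(inputs_embeds.shape)              # torch.Size([2, 36, 4096])
```

Under this reading, only the small alignment module needs to be trained while the LLM and the ID encoder stay frozen, which is consistent with the efficient tuning emphasized in the abstract.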

Similar Work