A Practice-Friendly Two-Stage LLM-Enhanced Paradigm in Sequential Recommendation

Liu Dugang, Xian Shenxian, Lin Xiaolin, Zhang Xiaolian, Zhu Hong, Fang Yuan, Chen Zhen, Ming Zhong. arXiv 2024

[Paper]    
Applications, Efficiency and Optimization, Fine-Tuning, Pretraining Methods, Tools, Training Techniques

The training paradigm that integrates large language models (LLMs) is gradually reshaping sequential recommender systems (SRS) and has shown promising results. However, most existing LLM-enhanced methods rely on rich textual information on the item side and on instance-level supervised fine-tuning (SFT) to inject collaborative information into the LLM, which is inefficient and of limited use in many applications. To alleviate these problems, this paper proposes a novel practice-friendly two-stage LLM-enhanced paradigm (TSLRec) for SRS. Specifically, in the information reconstruction stage, we design a new user-level SFT task for collaborative information injection with the assistance of a pre-trained SRS model, which is more efficient and compatible with limited text information: the LLM is asked to infer the latent category of each item and to reconstruct the corresponding user's preference distribution over all categories from the user's interaction sequence. In the information augmentation stage, we feed each item into the LLM to obtain a set of enhanced embeddings that combine collaborative information with the LLM's inference capabilities; these embeddings can then be used to help train various future SRS models. Finally, we verify the effectiveness and efficiency of TSLRec on three SRS benchmark datasets.
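To make the two stages concrete, below is a minimal, self-contained PyTorch sketch of the paradigm as the abstract describes it. Everything here is an illustrative assumption rather than the paper's implementation: the backbone (`ToyLLM`), the KL-divergence reconstruction loss, the mean-pooling aggregation, and all shapes and hyperparameters are stand-ins chosen for clarity.

```python
# Illustrative sketch of the two-stage TSLRec paradigm described above.
# All module names, losses, and shapes are assumptions for exposition,
# NOT the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ITEMS, NUM_CATEGORIES, DIM = 1000, 16, 64

class ToyLLM(nn.Module):
    """Stand-in for the LLM backbone being supervised fine-tuned."""
    def __init__(self):
        super().__init__()
        self.item_emb = nn.Embedding(NUM_ITEMS, DIM)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Head that infers a latent category for each item.
        self.category_head = nn.Linear(DIM, NUM_CATEGORIES)

    def forward(self, item_seq):
        return self.encoder(self.item_emb(item_seq))  # (B, L, DIM)

def reconstruction_stage(llm, item_seq, target_pref):
    """Stage 1 (information reconstruction): a user-level SFT step.
    The LLM infers each item's latent category, aggregates them into the
    user's preference distribution over all categories, and is trained to
    match `target_pref`, which a pre-trained SRS model would supply."""
    h = llm(item_seq)                                      # (B, L, DIM)
    item_cat = F.softmax(llm.category_head(h), dim=-1)     # per-item category dist.
    pred_pref = item_cat.mean(dim=1).clamp_min(1e-8).log() # user-level log-dist.
    return F.kl_div(pred_pref, target_pref, reduction="batchmean")

def augmentation_stage(llm, all_items):
    """Stage 2 (information augmentation): feed every item through the tuned
    LLM to obtain enhanced embeddings for training downstream SRS models."""
    with torch.no_grad():
        return llm(all_items.unsqueeze(0)).squeeze(0)      # (NUM_ITEMS, DIM)

if __name__ == "__main__":
    llm = ToyLLM()
    seqs = torch.randint(0, NUM_ITEMS, (8, 20))                 # 8 users, length-20 histories
    target = torch.softmax(torch.randn(8, NUM_CATEGORIES), -1)  # stand-in SRS supervision
    loss = reconstruction_stage(llm, seqs, target)
    loss.backward()                                             # one SFT step (optimizer omitted)
    enhanced = augmentation_stage(llm, torch.arange(NUM_ITEMS))
    print(loss.item(), enhanced.shape)
```

Note the division of labor the sketch mirrors: the pre-trained SRS model only provides supervision targets in stage 1, so the expensive instance-level SFT of prior work is replaced by one reconstruction task per user, and stage 2 is a single forward pass per item.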
