
ControlRec: Bridging the Semantic Gap between Language Model and Personalized Recommendation

Qiu Junyan, Wang Haitao, Hong Zhaolin, Yang Yiping, Liu Qiang, Wang Xingxing. arXiv 2023

Tags: Merging, Pretraining Methods, Prompting, Reinforcement Learning, Tools

The successful integration of large language models (LLMs) into recommendation systems has proven to be a major breakthrough in recent studies, paving the way for more generic and transferable recommendations. However, LLMs struggle to make effective use of user and item IDs, which are crucial identifiers for successful recommendation, mainly because IDs are represented in a semantic space distinct from the natural language (NL) on which LLMs are typically trained. To tackle this issue, we introduce ControlRec, a Contrastive prompt learning framework for Recommendation systems. ControlRec treats user/item IDs and NL as heterogeneous features and encodes them individually. To promote their alignment and integration in the semantic space, it adds two auxiliary contrastive objectives: (1) Heterogeneous Feature Matching (HFM), which aligns an item's description with the corresponding item ID, or with the ID of the user's next preferred item given their interaction sequence; and (2) Instruction Contrastive Learning (ICL), which merges the two data sources by contrasting the probability distributions of output sequences generated under diverse tasks. Experimental results on four public real-world datasets demonstrate the effectiveness of the proposed method in improving model performance.
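The abstract does not spell out the exact loss formulations. As a minimal sketch, assuming the HFM objective reduces to an InfoNCE-style contrast that pulls an ID embedding toward the embedding of its matching NL description and pushes it away from in-batch negatives, it might look like the following. The function name, tensor shapes, and temperature are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of an HFM-style contrastive objective: align ID
# embeddings with their matching NL description embeddings via a symmetric
# InfoNCE loss. All names and shapes are illustrative, not from the paper.
import torch
import torch.nn.functional as F

def hfm_contrastive_loss(id_emb: torch.Tensor,
                         nl_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """id_emb, nl_emb: (batch, dim) embeddings of IDs and NL descriptions;
    row i of each tensor is assumed to describe the same item (positive pair)."""
    id_emb = F.normalize(id_emb, dim=-1)
    nl_emb = F.normalize(nl_emb, dim=-1)
    # (batch, batch) cosine-similarity logits; the diagonal holds positive pairs
    logits = id_emb @ nl_emb.t() / temperature
    targets = torch.arange(id_emb.size(0), device=id_emb.device)
    # symmetric cross-entropy: match IDs to descriptions and descriptions to IDs
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Example usage with random embeddings (batch of 8 ID/NL pairs, dim 64):
# loss = hfm_contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
```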

Similar Work