
Enhancing Collaborative Semantics Of Language Model-driven Recommendations Via Graph-aware Learning

Guan Zhong, Wu Likang, Zhao Hongke, He Ming, Fan Jianpin. arXiv 2024

[Paper]    
Ethics And Bias, Fine Tuning, In Context Learning, Pretraining Methods, Prompting, Reinforcement Learning, Training Techniques

Large Language Models (LLMs) are increasingly prominent in the recommendation systems domain. Existing studies usually rely on in-context learning or supervised fine-tuning on task-specific data to align LLMs with recommendation tasks. However, the substantial gap between the semantic spaces of language processing tasks and recommendation tasks poses a nonnegligible challenge. Specifically, without the ability to adequately capture collaborative information, existing modeling paradigms struggle to learn behavior patterns within community groups, leaving LLMs unable to discern the implicit interaction semantics of recommendation scenarios. To address this, we enhance the ability of language model-driven recommendation models to learn from structured data, specifically interaction graphs rich in collaborative semantics. We propose Graph-Aware Learning for Language Model-Driven Recommendations (GAL-Rec). GAL-Rec improves the understanding of user-item collaborative semantics by imitating the way Graph Neural Networks (GNNs) aggregate multi-hop information, thereby fully exploiting the substantial learning capacity of LLMs to independently handle the complex graphs in recommender systems. Extensive experiments on three real-world datasets demonstrate that GAL-Rec significantly enhances the comprehension of collaborative semantics and improves recommendation performance.
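The abstract refers to the multi-hop neighbor aggregation that GNNs perform on a user-item interaction graph. As a minimal, illustrative sketch (not the paper's actual method or code), the following shows LightGCN-style mean aggregation on a toy bipartite graph, where stacking two propagation layers mixes 2-hop collaborative signals into each node's embedding. All names and the toy graph here are hypothetical.

```python
# Hypothetical sketch of GNN-style multi-hop aggregation on a user-item
# interaction graph. Illustrative only; not taken from the GAL-Rec paper.

def propagate(embeddings, adjacency):
    """One mean-aggregation layer: each node's new embedding is the
    average of its neighbors' current embeddings."""
    dim = len(next(iter(embeddings.values())))
    new = {}
    for node, neighbours in adjacency.items():
        acc = [0.0] * dim
        for n in neighbours:
            for i, v in enumerate(embeddings[n]):
                acc[i] += v
        new[node] = [v / len(neighbours) for v in acc]
    return new

# Toy bipartite interaction graph: users u1, u2 and items i1, i2.
adjacency = {
    "u1": ["i1", "i2"],
    "u2": ["i1"],
    "i1": ["u1", "u2"],
    "i2": ["u1"],
}
embeddings = {
    "u1": [1.0, 0.0],
    "u2": [0.0, 1.0],
    "i1": [1.0, 1.0],
    "i2": [0.0, 0.0],
}

# Two propagation layers expose 2-hop collaborative signals: after the
# second layer, u2's embedding already mixes in u1's behavior via i1.
layer1 = propagate(embeddings, adjacency)
layer2 = propagate(layer1, adjacency)
```

Two layers suffice to show the effect the abstract alludes to: a user's representation comes to reflect other users who interacted with the same items, which is the collaborative signal GAL-Rec trains the LLM to capture.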

Similar Work