
Leveraging LLM Reasoning Enhances Personalized Recommender Systems

Alicia Y. Tsai, Adam Kraft, Long Jin, Chenwei Cai, Anahita Hosseini, Taibai Xu, Zemin Zhang, Lichan Hong, Ed H. Chi, Xinyang Yi. arXiv 2024

[Paper]    
Tags: Prompting, RAG, Reinforcement Learning, Tools

Recent advancements have showcased the potential of Large Language Models (LLMs) in executing reasoning tasks, particularly facilitated by Chain-of-Thought (CoT) prompting. While tasks like arithmetic reasoning involve clear, definitive answers and logical chains of thought, the application of LLM reasoning in recommender systems (RecSys) presents a distinct challenge. RecSys tasks revolve around subjectivity and personalized preferences, an under-explored domain for utilizing LLMs' reasoning capabilities. Our study explores several aspects to better understand reasoning for RecSys and demonstrates how task quality improves by utilizing LLM reasoning in both zero-shot and finetuning settings. Additionally, we propose RecSAVER (Recommender Systems Automatic Verification and Evaluation of Reasoning) to automatically assess the quality of LLM reasoning responses without requiring curated gold references or human raters. We show that our framework aligns with real human judgment on the coherence and faithfulness of reasoning responses. Overall, our work shows that incorporating reasoning into RecSys can improve personalized tasks, paving the way for further advancements in recommender system methodologies.

Similar Work