
Prompting Large Language Models For Recommender Systems: A Comprehensive Framework And Empirical Analysis

Xu Lanling, Zhang Junjie, Li Bingqian, Wang Jinpeng, Cai Mingchen, Zhao Wayne Xin, Wen Ji-rong. arXiv 2024

[Paper]    
Tags: Applications, GPT, Model Architecture, Prompting, Tools

Recently, large language models such as ChatGPT have showcased remarkable abilities in solving general tasks, demonstrating their potential for applications in recommender systems. To assess how effectively LLMs can be used in recommendation tasks, our study primarily focuses on employing LLMs as recommender systems through prompt engineering. We propose a general framework for utilizing LLMs in recommendation tasks, focusing on the capabilities of LLMs as recommenders. To conduct our analysis, we formalize the input of LLMs for recommendation into natural language prompts with two key aspects, and explain how our framework can be generalized to various recommendation scenarios. Regarding the use of LLMs as recommenders, we analyze the impact of public availability, tuning strategies, model architecture, parameter scale, and context length on recommendation results based on the classification of LLMs. Regarding prompt engineering, we further analyze the impact of four important components of prompts, i.e., task descriptions, user interest modeling, candidate item construction, and prompting strategies. In each section, we first define and categorize concepts in line with the existing literature. Then, we propose inspiring research questions followed by experiments to systematically analyze the impact of different factors on two public datasets. Finally, we summarize promising directions to shed light on future research.
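The four prompt components named in the abstract (task description, user interest modeling, candidate item construction, and prompting strategy) can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not code from the paper: it assembles a zero-shot ranking prompt for a movie-recommendation scenario, and all function and variable names are assumptions made for this example.

```python
# Hypothetical sketch (not the paper's implementation): composing a
# recommendation prompt from the four components analyzed in the paper.

def build_recommendation_prompt(history, candidates, top_k=3):
    """Compose a natural-language prompt asking an LLM to rank candidate items."""
    # 1. Task description: tell the model it acts as a recommender.
    task = (
        f"You are a movie recommender. Rank the candidate movies and "
        f"return the top {top_k} the user is most likely to watch next."
    )
    # 2. User interest modeling: summarize the user's interaction history.
    interests = "The user recently watched: " + ", ".join(history) + "."
    # 3. Candidate item construction: enumerate the items to be ranked.
    items = "\n".join(f"{i + 1}. {title}" for i, title in enumerate(candidates))
    # 4. Prompting strategy: a simple zero-shot instruction here; few-shot or
    #    chain-of-thought variants would append examples or reasoning cues.
    strategy = "Answer with the item numbers only, ordered by preference."
    return f"{task}\n{interests}\nCandidates:\n{items}\n{strategy}"


if __name__ == "__main__":
    prompt = build_recommendation_prompt(
        history=["Inception", "Interstellar", "The Prestige"],
        candidates=["Dunkirk", "Titanic", "Tenet", "Frozen"],
    )
    print(prompt)  # this string would then be sent to an LLM such as ChatGPT
```

Varying any one of the four parts while holding the others fixed corresponds to the kind of controlled comparison the paper's empirical analysis performs.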

Similar Work