
Ragsys: Item-cold-start Recommender As RAG System

Contal Emile, McGoldrick Garrin. arXiv 2024

[Paper]    
Applications · Ethics And Bias · Few Shot · Fine Tuning · In Context Learning · Pretraining Methods · Prompting · RAG · Reinforcement Learning · Training Techniques

Large Language Models (LLMs) hold immense promise for real-world applications, but their generic knowledge often falls short of domain-specific needs. Fine-tuning, a common approach, can suffer from catastrophic forgetting and hinder generalizability. In-Context Learning (ICL) offers an alternative that can leverage Retrieval-Augmented Generation (RAG) to provide LLMs with relevant demonstrations for few-shot learning tasks. This paper explores the desired qualities of a demonstration retrieval system for ICL. We argue that demonstration retrieval in this context resembles an item-cold-start recommender system, prioritizing discovery and maximizing information gain over strict relevance. We propose a novel evaluation method that measures the LLM’s subsequent performance on NLP tasks, eliminating the need for subjective diversity scores. Our findings demonstrate the critical role of diversity and quality bias in retrieved demonstrations for effective ICL, and highlight the potential of recommender-system techniques in this domain.
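To make the retrieval-for-ICL idea concrete, the sketch below shows one common way to bias demonstration retrieval toward diversity rather than pure relevance: embedding-based retrieval re-ranked with Maximal Marginal Relevance (MMR), with the selected examples then assembled into a few-shot prompt. This is an illustrative stand-in, not the method from the paper; the random embeddings, the `mmr_select` helper, and the prompt template are assumptions made for the example.

```python
import numpy as np

def cosine_sim(query_vec, matrix):
    """Cosine similarity between one vector and each row of a matrix."""
    q = query_vec / (np.linalg.norm(query_vec) + 1e-12)
    m = matrix / (np.linalg.norm(matrix, axis=1, keepdims=True) + 1e-12)
    return m @ q

def mmr_select(query_vec, demo_vecs, k=4, lam=0.5):
    """Pick k demonstration indices, trading query relevance against
    redundancy with already-selected demonstrations (Maximal Marginal Relevance)."""
    relevance = cosine_sim(query_vec, demo_vecs)
    selected, candidates = [], list(range(len(demo_vecs)))
    while candidates and len(selected) < k:
        if not selected:
            best = candidates[int(np.argmax(relevance[candidates]))]
        else:
            chosen = demo_vecs[selected]
            best = max(
                candidates,
                key=lambda i: lam * relevance[i]
                - (1 - lam) * cosine_sim(demo_vecs[i], chosen).max(),
            )
        selected.append(best)
        candidates.remove(best)
    return selected

def build_few_shot_prompt(query_text, demonstrations):
    """Format the retrieved (input, output) pairs as an in-context-learning prompt."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in demonstrations)
    return f"{shots}\n\nInput: {query_text}\nOutput:"

# Toy usage: random embeddings stand in for a real text encoder.
rng = np.random.default_rng(0)
demo_pool = [(f"example input {i}", f"example output {i}") for i in range(10)]
demo_vecs = rng.normal(size=(10, 32))
query_vec = rng.normal(size=32)
idx = mmr_select(query_vec, demo_vecs, k=3, lam=0.7)
print(build_few_shot_prompt("new query", [demo_pool[i] for i in idx]))
```

The `lam` parameter controls the relevance/diversity trade-off; lowering it penalizes redundant demonstrations more heavily, which is one simple way to realize the discovery-over-relevance bias the paper argues for.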

Similar Work