Optimization Methods For Personalizing Large Language Models Through Retrieval Augmentation

Alireza Salemi, Surya Kallumadi, Hamed Zamani. arXiv 2024

[Paper]    
Applications · Distillation · Efficiency And Optimization · RAG · Reinforcement Learning

This paper studies retrieval-augmented approaches for personalizing large language models (LLMs), which potentially have a substantial impact on various applications and domains. We present the first attempt to optimize the retrieval models that deliver a limited number of personal documents to LLMs for personalized generation. We develop two optimization algorithms that solicit feedback from the downstream personalized generation task for retrieval optimization: one based on reinforcement learning, whose reward function can be defined using any arbitrary metric for personalized generation, and another based on knowledge distillation from the downstream LLM to the retrieval model. This paper also introduces a pre- and post-generation retriever selection model that decides which retriever to choose for each LLM input. Extensive experiments on diverse tasks from the Language Model Personalization (LaMP) benchmark reveal statistically significant improvements on six of the seven datasets.
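The reinforcement-learning variant described above can be sketched with a score-function (REINFORCE-style) update: the retriever defines a distribution over a user's personal documents, one is sampled and passed to the LLM, and the downstream generation metric serves as the reward. The sketch below is illustrative only, assuming a toy scalar score per document and a stand-in `reward_fn` in place of an actual LLM call and generation metric; it is not the authors' implementation.

```python
import math
import random

def softmax(scores):
    """Convert raw document scores into a sampling distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_step(scores, reward_fn, lr=0.5, rng=random):
    """One REINFORCE update for the retriever policy.

    Sample a document index from the softmax over scores, observe the
    downstream reward (here a stand-in for any personalized-generation
    metric), and apply the score-function gradient:
    d/ds_j log p(idx) = 1[j == idx] - p_j.
    """
    probs = softmax(scores)
    idx = rng.choices(range(len(scores)), weights=probs)[0]
    r = reward_fn(idx)
    return [s + lr * r * ((1.0 if j == idx else 0.0) - probs[j])
            for j, s in enumerate(scores)], idx, r

# Toy usage: pretend document 2 is the one that helps personalization,
# so feeding it to the LLM yields reward 1 and the others yield 0.
random.seed(0)
scores = [0.0, 0.0, 0.0]
reward = lambda i: 1.0 if i == 2 else 0.0
for _ in range(200):
    scores, _, _ = reinforce_step(scores, reward)
best = scores.index(max(scores))  # the retriever learns to rank doc 2 first
```

Because the reward is observed only after generation, any black-box metric (ROUGE, accuracy, a personalization score) can drive the update, which is the property the abstract emphasizes; no gradient through the LLM is required.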

Similar Work