
User-LLM: Efficient LLM Contextualization With User Embeddings

Lin Ning, Luyang Liu, Jiaxing Wu, Neo Wu, Devora Berlowitz, Sushant Prakash, Bradley Green, Shawn O'Banion, Jun Xie. arXiv 2024

[Paper]    
Attention Mechanism, Efficiency and Optimization, Model Architecture, Prompting, RAG, Reinforcement Learning, Survey Paper, Tools, Training Techniques

Large language models (LLMs) have achieved remarkable success across various domains, but effectively incorporating complex and potentially noisy user timeline data into LLMs remains a challenge. Current approaches often involve translating user timelines into text descriptions before feeding them to LLMs, which can be inefficient and may not fully capture the nuances of user behavior. Inspired by how LLMs are effectively integrated with images through direct embeddings, we propose User-LLM, a novel framework that leverages user embeddings to directly contextualize LLMs with a user's interaction history. These embeddings, generated by a user encoder pretrained using self-supervised learning on diverse user interactions, capture latent user behaviors and interests as well as their evolution over time. We integrate these user embeddings with LLMs through cross-attention, enabling LLMs to dynamically adapt their responses based on the context of a user's past actions and preferences. Our approach achieves significant efficiency gains by representing user timelines directly as embeddings, leading to substantial inference speedups of up to 78.1X. Comprehensive experiments on MovieLens, Amazon Review, and Google Local Review datasets demonstrate that User-LLM outperforms text-prompt-based contextualization on tasks requiring deep user understanding, with improvements of up to 16.33%, particularly excelling on long sequences that capture subtle shifts in user behavior. Furthermore, the incorporation of Perceiver layers streamlines the integration between user encoders and LLMs, yielding additional computational savings.
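
To make the architecture described in the abstract concrete, the sketch below illustrates the general idea of Perceiver-style compression of user-history embeddings followed by cross-attention into an LLM's hidden states. It is a minimal, illustrative sketch only: the module names, shapes, and hyperparameters are assumptions, not the authors' released implementation, and the pretrained user encoder is represented by a placeholder tensor.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumptions throughout): a Perceiver-style resampler that
# compresses a long sequence of user-history embeddings into a few latents,
# and a cross-attention block that injects those latents into LLM hidden states.


class PerceiverResampler(nn.Module):
    """Compress a variable-length sequence of user embeddings into a fixed
    number of latent vectors via cross-attention (Perceiver-style)."""

    def __init__(self, dim: int, num_latents: int = 16, num_heads: int = 8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, user_emb: torch.Tensor) -> torch.Tensor:
        # user_emb: (batch, history_len, dim) -> (batch, num_latents, dim)
        latents = self.latents.unsqueeze(0).expand(user_emb.size(0), -1, -1)
        out, _ = self.attn(query=latents, key=user_emb, value=user_emb)
        return out


class UserCrossAttentionBlock(nn.Module):
    """Let LLM hidden states attend to the compressed user embeddings,
    with a residual connection so the base LLM pathway stays intact."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, hidden: torch.Tensor, user_ctx: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, tokens, dim), user_ctx: (batch, num_latents, dim)
        attended, _ = self.attn(query=self.norm(hidden), key=user_ctx, value=user_ctx)
        return hidden + attended


# Usage sketch: compress a user's interaction-history embeddings (assumed to
# come from a pretrained user encoder), then contextualize the LLM's hidden
# states for the current prompt.
batch, history_len, dim = 2, 200, 768
user_history_emb = torch.randn(batch, history_len, dim)  # placeholder for encoder output
llm_hidden = torch.randn(batch, 32, dim)                 # placeholder prompt hidden states

resampler = PerceiverResampler(dim)
xattn = UserCrossAttentionBlock(dim)
contextualized = xattn(llm_hidden, resampler(user_history_emb))
print(contextualized.shape)  # torch.Size([2, 32, 768])
```

Compressing the history into a small, fixed number of latents before cross-attention is what keeps the added cost independent of timeline length, which is consistent with the efficiency gains the abstract attributes to embedding-based contextualization and the Perceiver layers.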

Similar Work