
Personalized Query Rewriting In Conversational AI Agents

Alireza Roshan-Ghias, Clint Solomon Mathialagan, Pragaash Ponnusamy, Lambert Mathias, Chenlei Guo. arXiv 2020

[Paper]    
Agentic Applications Attention Mechanism Model Architecture RAG Reinforcement Learning

Spoken language understanding (SLU) systems in conversational AI agents often experience errors in the form of misrecognitions by automatic speech recognition (ASR) or semantic gaps in natural language understanding (NLU). These errors easily translate into user frustration, particularly in recurrent events, e.g., regularly toggling an appliance or calling a frequent contact. In this work, we propose a query rewriting approach that leverages users’ historically successful interactions as a form of memory. We present a neural retrieval model and a pointer-generator network with hierarchical attention, and show that they perform significantly better at the query rewriting task with the aforementioned user memories than without. We also highlight how our approach with the proposed models leverages the structural and semantic diversity in ASR’s output towards recovering users’ intents.
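The paper's retriever and pointer-generator are learned neural models; as a rough illustration of the memory-based rewriting idea only, the sketch below substitutes character-trigram Jaccard similarity for the learned retrieval model. The function names, the memory contents, and the threshold are all illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (NOT the paper's neural model): rewrite a possibly
# misrecognized ASR hypothesis to the closest query in the user's memory of
# historically successful interactions, using character-trigram Jaccard
# similarity as a stand-in for the neural retriever.

def trigrams(text):
    """Character trigrams of a lowercased, padded utterance."""
    t = f"  {text.lower()} "
    return {t[i:i + 3] for i in range(len(t) - 2)}

def similarity(a, b):
    """Jaccard similarity over character trigrams."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def rewrite_query(asr_hypothesis, user_memory, threshold=0.5):
    """Return the most similar historically successful query, or the
    original hypothesis when nothing in memory is similar enough."""
    best = max(user_memory, key=lambda m: similarity(asr_hypothesis, m),
               default=None)
    if best is not None and similarity(asr_hypothesis, best) >= threshold:
        return best
    return asr_hypothesis

# Example: an ASR misrecognition of a recurrent command is recovered.
memory = ["turn on the living room lights", "call mom", "play jazz radio"]
print(rewrite_query("turn on the living groom lights", memory))
```

In the paper this matching is done by a trained retrieval model (and rewriting by a pointer-generator with hierarchical attention over the user's history), so semantically related but lexically distant utterances can also be matched; the string-overlap stand-in here captures only the overall rewrite-from-memory flow.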

Similar Work