
THEANINE: Revisiting Memory Management In Long-term Conversations With Timeline-augmented Response Generation

Kim Seo Hyun, Ong Kai Tzu-iunn, Kwon Taeyoon, Kim Namyoung, Ka Keummin, Bae Seonghyeon, Jo Yohan, Hwang Seung-won, Lee Dongha, Yeo Jinyoung. arXiv 2024

[Paper]    
Tags: Applications, Reinforcement Learning, Tools

Large language models (LLMs) can process lengthy dialogue histories during prolonged interaction with users without additional memory modules; however, their responses tend to overlook or incorrectly recall information from the past. In this paper, we revisit memory-augmented response generation in the era of LLMs. While prior work focuses on discarding outdated memories, we argue that such memories can provide contextual cues that help dialogue systems understand the development of past events and, therefore, benefit response generation. We present Theanine, a framework that augments LLMs' response generation with memory timelines: series of memories that demonstrate the development and causality of relevant past events. Along with Theanine, we introduce TeaFarm, a counterfactual-driven question-answering pipeline that addresses the limitation of G-Eval in long-term conversations. Supplementary videos of our methods and the TeaBag dataset for TeaFarm evaluation are available at https://theanine-693b0.web.app/.
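
To make the timeline idea more concrete, the sketch below shows one way timeline-augmented generation could look in practice: related memories are chained along their links into a chronological timeline, which is then prepended to the prompt for response generation. The `Memory` schema, `build_timeline`, and `timeline_prompt` helpers are illustrative assumptions for this page, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Memory:
    """A summarized memory from a past dialogue session (hypothetical schema)."""
    session: int
    text: str
    # Index of the memory this one follows from (temporal/causal link), if any.
    parent: Optional[int] = None

def build_timeline(memories: List[Memory], seed_idx: int) -> List[Memory]:
    """Walk parent links backwards from a retrieved memory to recover the
    chain of related past events, returned oldest-first."""
    chain, idx, visited = [], seed_idx, set()
    while idx is not None and idx not in visited:
        visited.add(idx)
        chain.append(memories[idx])
        idx = memories[idx].parent
    return list(reversed(chain))

def timeline_prompt(timeline: List[Memory], user_message: str) -> str:
    """Format a retrieved timeline as context for response generation."""
    lines = [f"[Session {m.session}] {m.text}" for m in timeline]
    return (
        "Relevant history of this topic (oldest to newest):\n"
        + "\n".join(lines)
        + f"\n\nUser: {user_message}\nAssistant:"
    )

if __name__ == "__main__":
    mems = [
        Memory(1, "User adopted a puppy named Mochi."),
        Memory(3, "Mochi was sick and visited the vet.", parent=0),
        Memory(5, "Mochi recovered and started obedience training.", parent=1),
    ]
    # Assume memory index 2 was retrieved as most relevant to the new message.
    prompt = timeline_prompt(build_timeline(mems, seed_idx=2), "How is training going?")
    print(prompt)  # this prompt would then be passed to an LLM for the final reply
```

The point of the sketch is the contrast with deletion-based memory management: even the outdated "Mochi was sick" memory stays in the timeline, giving the model the trajectory of events rather than only the latest state.
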
