
Episodic Memory Reader: Learning What To Remember For Question Answering From Streaming Data

Moonsu Han, Minki Kang, Hyunwoo Jung, Sung Ju Hwang. arXiv 2019

[Paper]    
Agentic, Applications, Model Architecture, Pretraining Methods, Reinforcement Learning, Transformer

We consider a novel question answering (QA) task in which the machine must read from large streaming data (long documents or videos) without knowing when the questions will be given, a setting that is difficult for existing QA methods due to their lack of scalability. To tackle this problem, we propose the Episodic Memory Reader (EMR), a novel end-to-end deep network model for reading comprehension that sequentially reads the input contexts into an external memory while replacing memories that are less important for answering unseen questions. Specifically, we train an RL agent to replace a memory entry when the memory is full, in order to maximize QA accuracy at a future timepoint, and encode the external memory with either a GRU or a Transformer architecture to learn representations that consider the relative importance of the memory entries. We validate our model on a synthetic dataset (bAbI) as well as on real-world large-scale textual QA (TriviaQA) and video QA (TVQA) datasets, on which it achieves significant improvements over rule-based memory scheduling policies and an RL-based baseline that independently learns the query-specific importance of each memory entry.
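To make the memory-scheduling idea concrete, below is a minimal PyTorch sketch of an EMR-style replacement policy, not the authors' implementation: the class and names (`EMRSketch`, `write`, `capacity`) are hypothetical, the GRU variant and question-conditioned QA module are omitted, and the REINFORCE reward wiring is only indicated in comments. It shows the core loop: when the memory is full, a Transformer encoder scores all entries jointly with the incoming context (so scores reflect relative, not independent, importance), and the sampled slot is overwritten.

```python
import torch
import torch.nn as nn

class EMRSketch(nn.Module):
    """Hypothetical minimal EMR-style memory scheduler (illustrative only).

    Keeps up to `capacity` context embeddings. When the memory is full,
    a small Transformer encoder scores the entries jointly with the
    incoming context, and the policy samples one slot to replace.
    """

    def __init__(self, dim=64, capacity=8, nhead=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.score = nn.Linear(dim, 1)   # per-slot replacement logit
        self.capacity = capacity
        self.memory = []                 # list of (dim,) embeddings

    def write(self, x):
        """Store context embedding x; if the memory is full, replace one entry.

        Returns the log-prob of the sampled replacement (None if nothing was
        replaced). An outer REINFORCE loop would scale the collected log-probs
        by the QA reward observed once the question finally arrives.
        """
        if len(self.memory) < self.capacity:
            self.memory.append(x.detach())
            return None
        # Jointly encode the memory entries plus the candidate: (1, N+1, dim)
        entries = torch.stack(self.memory + [x]).unsqueeze(0)
        h = self.encoder(entries)
        logits = self.score(h).squeeze(-1).squeeze(0)[:-1]  # only the N old slots
        dist = torch.distributions.Categorical(logits=logits)
        idx = dist.sample()
        self.memory[idx.item()] = x.detach()  # overwrite the chosen slot
        return dist.log_prob(idx)


# Usage: stream 20 contexts through a capacity-8 memory.
reader = EMRSketch()
log_probs = [lp for t in range(20)
             if (lp := reader.write(torch.randn(64))) is not None]
# With a terminal QA reward r, a REINFORCE-style update would be:
# loss = -(r * torch.stack(log_probs)).sum(); loss.backward()
```

The joint encoding is the point of the paper's GRU/Transformer variants: an entry's eviction score depends on what else is currently stored, unlike the RL baseline in the abstract that scores each memory entry independently.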

Similar Work