
Uncertainty Guided Global Memory Improves Multi-hop Question Answering

Alsu Sagirova, Mikhail Burtsev. arXiv 2023

Applications Attention Mechanism Fine Tuning Model Architecture Pretraining Methods Reinforcement Learning Training Techniques Transformer

Transformers have become the gold standard for many natural language processing tasks and, in particular, for multi-hop question answering (MHQA). This task involves processing a long document and reasoning over multiple parts of it. The landscape of MHQA approaches falls into two primary categories. The first group focuses on extracting supporting evidence, thereby constraining the QA model’s context to the predicted facts. Conversely, the second group relies on the attention mechanism of a long-input encoding model to facilitate multi-hop reasoning. However, attention-based token representations lack explicit global contextual information to connect reasoning steps. To address these issues, we propose GEMFormer, a two-stage method that first collects relevant information from the entire document into memory and then combines it with the local context to solve the task. Our experimental results show that fine-tuning a pre-trained model with memory-augmented input, including the most certain global elements, improves the model’s performance on three MHQA datasets compared to the baseline. We also found that the explicit global memory contains information from the supporting facts required for the correct answer.
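For intuition, below is a minimal Python sketch of the two-stage idea, under stated assumptions: the toy `mean_token_entropy` scorer, the `budget` parameter, and the `[SEP]`-joined input format are illustrative stand-ins, not GEMFormer's actual mechanics, which (per the abstract) select the most certain global elements using the model's own uncertainty.

```python
# Illustrative sketch only: `mean_token_entropy`, `budget`, and the fake
# per-sentence probabilities below are assumptions for demonstration; a
# real system would read token probabilities from the language model.
import math
from typing import Callable, List


def mean_token_entropy(token_probs: List[float]) -> float:
    """Toy uncertainty surrogate: average -p*log(p) over one probability
    per token; lower values mean the model is more certain about the span."""
    return -sum(p * math.log(p + 1e-12) for p in token_probs) / len(token_probs)


def build_global_memory(
    sentences: List[str],
    uncertainty: Callable[[str], float],
    budget: int,
) -> List[str]:
    """Stage 1: score every sentence in the full document and keep the
    `budget` most certain ones as explicit global memory."""
    return sorted(sentences, key=uncertainty)[:budget]


def memory_augmented_input(
    memory: List[str], question: str, local_context: str, sep: str = " [SEP] "
) -> str:
    """Stage 2: prepend the global memory to the question and the local
    context chunk before feeding the reader model."""
    return sep.join([" ".join(memory), question, local_context])


if __name__ == "__main__":
    doc = [
        "Alice was born in Paris.",
        "The weather was unremarkable that year.",
        "Paris is the capital of France.",
    ]
    # Stand-in scorer: sentences mentioning "Paris" get confident (0.9)
    # token probabilities, the rest get uncertain (0.4) ones.
    fake_probs = {
        s: [0.9 if "Paris" in s else 0.4] * len(s.split()) for s in doc
    }
    uncertainty = lambda s: mean_token_entropy(fake_probs[s])

    memory = build_global_memory(doc, uncertainty, budget=2)
    print(memory_augmented_input(memory, "Where was Alice born?", doc[0]))
```

Ranking by low uncertainty acts as a confidence filter: the memory keeps only spans the model reads reliably, which is consistent with the abstract's finding that the collected memory overlaps with the gold supporting facts.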

Similar Work