
You Only Need One Model For Open-domain Question Answering

Lee Haejun, Kedia Akhil, Lee Jongwon, Paranjape Ashwin, Manning Christopher D., Woo Kyoung-gu. arXiv 2021

Tags: Applications, Attention Mechanism, Model Architecture, Pretraining Methods, Training Techniques, Transformer

Recent approaches to open-domain question answering retrieve passages from an external knowledge base with a retriever model, optionally rerank them with a separate reranker model, and generate an answer with yet another reader model. Despite performing related tasks, these models have separate parameters and are only weakly coupled during training. We propose casting the retriever and the reranker as internal passage-wise attention mechanisms applied sequentially within the transformer architecture and feeding the computed representations to the reader, with the hidden representations progressively refined at each stage. This allows us to use a single question answering model trained end-to-end, which uses model capacity more efficiently and also leads to better gradient flow. We present a pre-training method to effectively train this architecture and evaluate our model on the Natural Questions and TriviaQA open datasets. For a fixed parameter budget, our model outperforms the previous state-of-the-art model by 1.0 and 0.7 exact match points, respectively.
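To make the idea of "retriever and reranker as internal passage-wise attention stages" concrete, the sketch below shows one possible shape of such a single-model pipeline. It is a minimal illustration, not the authors' implementation: the module names (`PassageWiseAttention`, `SingleModelODQA`), layer sizes, and the specific way passage representations are scored and refined are all assumptions made for the example.

```python
# Minimal sketch (assumed architecture, not the paper's exact code) of folding
# retrieval and reranking into one transformer as passage-wise attention stages.
import torch
import torch.nn as nn


class PassageWiseAttention(nn.Module):
    """Scores each passage from its leading ([CLS]-like) hidden state and lets
    passages attend to one another, refining their representations."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.score = nn.Linear(d_model, 1)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, hidden):  # hidden: (n_passages, seq_len, d_model)
        cls = hidden[:, 0, :]                        # one vector per passage
        scores = self.score(cls).squeeze(-1)         # passage-wise relevance
        # Treat the passages themselves as a sequence and attend across them.
        refined, _ = self.attn(cls.unsqueeze(0), cls.unsqueeze(0), cls.unsqueeze(0))
        hidden = hidden.clone()
        hidden[:, 0, :] = refined.squeeze(0)         # write refined reps back
        return hidden, scores


class SingleModelODQA(nn.Module):
    """One transformer stack; retriever, reranker, and reader share parameters
    end-to-end instead of being three weakly coupled models."""

    def __init__(self, vocab_size=30522, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.lower = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_layers)
        self.retrieve = PassageWiseAttention(d_model, n_heads)  # "retriever" stage
        self.rerank = PassageWiseAttention(d_model, n_heads)    # "reranker" stage
        self.upper = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_layers)
        self.span_head = nn.Linear(d_model, 2)                   # start/end logits

    def forward(self, qp_ids):  # qp_ids: (n_passages, seq_len) question + passage
        h = self.lower(self.embed(qp_ids))
        h, retrieval_scores = self.retrieve(h)   # stage 1: score/select passages
        h, rerank_scores = self.rerank(h)        # stage 2: refine the ordering
        h = self.upper(h)                        # stage 3: read out the answer
        start_logits, end_logits = self.span_head(h).unbind(-1)
        return retrieval_scores, rerank_scores, start_logits, end_logits


# Toy usage: 5 candidate passages, each concatenated with the question.
model = SingleModelODQA()
ids = torch.randint(0, 30522, (5, 64))
ret, rer, start, end = model(ids)
print(ret.shape, rer.shape, start.shape)  # torch.Size([5]) ... torch.Size([5, 64])
```

Because every stage sits inside one differentiable stack, the answer-extraction loss can back-propagate through the reranking and retrieval attention layers, which is the better gradient flow and capacity sharing the abstract refers to.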

Similar Work