
FastFiD: Improve Inference Efficiency Of Open Domain Question Answering Via Sentence Selection

Huang Yufei, Han Xu, Sun Maosong. arXiv 2024

[Paper] [Code]    
Applications Attention Mechanism Efficiency And Optimization Has Code Model Architecture Reinforcement Learning Tools

Open Domain Question Answering (ODQA) has been advancing rapidly in recent times, driven by significant developments in dense passage retrieval and pretrained language models. Current models typically adopt the FiD framework, which consists of a neural retriever and an encoder-decoder neural reader. During answer generation, the retriever retrieves a large number of passages (around 100, for instance), each of which is then encoded individually by the encoder. The decoder then makes predictions based on these encoded passages. Nevertheless, this framework can be relatively time-consuming, particularly due to the extensive total length of the gathered passages. To address this, we introduce FastFiD, a novel approach that performs sentence selection on the encoded passages. This retains valuable sentences while reducing the context length required for generating answers. Experiments on three commonly used datasets (Natural Questions, TriviaQA and ASQA) demonstrate that our method can enhance inference speed by 2.3X-5.7X while maintaining the model’s performance. Moreover, an in-depth analysis of the model’s attention reveals that the selected sentences indeed make a substantial contribution to the final answer. The code is publicly available at https://github.com/thunlp/FastFiD.
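Below is a minimal, illustrative sketch of the pipeline the abstract describes: passages are encoded independently (as in FiD), and a sentence-selection step keeps only the highest-scoring sentences before decoding. This is not the released implementation; the `select_sentences` helper, the mean-pooled linear scorer, and the fixed top-k cutoff are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn


def encode_passages(encoder, passage_ids):
    """Encode each retrieved passage independently, FiD-style.

    passage_ids: (num_passages, passage_len) token ids for one question.
    Returns encoder states of shape (num_passages, passage_len, hidden).
    """
    return encoder(passage_ids)


def select_sentences(encoder_states, sentence_spans, scorer, top_k=10):
    """Keep only the top-k scored sentences across all encoded passages.

    sentence_spans[p] lists (start, end) token offsets of sentences in passage p.
    The decoder would then cross-attend over this much shorter context.
    """
    scored = []
    for p, spans in enumerate(sentence_spans):
        for start, end in spans:
            states = encoder_states[p, start:end]            # (sent_len, hidden)
            score = scorer(states.mean(dim=0)).squeeze(-1)   # scalar relevance score
            scored.append((score.item(), states))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    kept = [states for _, states in scored[:top_k]]
    return torch.cat(kept, dim=0)                            # (selected_tokens, hidden)


if __name__ == "__main__":
    hidden = 16
    # Stand-in "encoder" that just produces random states, to keep shapes visible.
    encoder = lambda ids: torch.randn(ids.shape[0], ids.shape[1], hidden)
    scorer = nn.Linear(hidden, 1)

    passages = torch.randint(0, 1000, (100, 200))         # 100 passages, 200 tokens each
    spans = [[(0, 50), (50, 120), (120, 200)]] * 100       # 3 "sentences" per passage
    states = encode_passages(encoder, passages)
    context = select_sentences(states, spans, scorer, top_k=10)
    print(context.shape)  # far shorter than the full 100 x 200 token context
```

The key point the sketch conveys is that selection happens on already-encoded states, so the expensive passage encoding is unchanged while the decoder's cross-attention context shrinks, which is where the reported speedup comes from.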

Similar Work