End-to-end Answer Chunk Extraction And Ranking For Reading Comprehension

Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, Bowen Zhou. arXiv 2016

[Paper]    
Attention Mechanism · Model Architecture · Transformer

This paper proposes the Dynamic Chunk Reader (DCR), an end-to-end neural reading comprehension (RC) model that extracts and ranks a set of candidate answer chunks from a given document in order to answer a question. Unlike earlier neural RC models, which primarily predicted single tokens or entities, DCR can predict answers of variable length. The model encodes the document and the input question with recurrent neural networks, applies a word-by-word attention mechanism to build question-aware representations of the document, constructs representations for candidate chunks, and finally ranks the candidates, proposing the top-ranked chunk as the answer. Experimental results show that DCR achieves state-of-the-art exact-match and F1 scores on the SQuAD dataset.
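The pipeline above (encode, attend word-by-word, build chunk representations, rank) can be sketched with a toy NumPy example. This is a minimal illustration, not the paper's implementation: the dimensions, the random vectors standing in for RNN encoder outputs, the first-and-last-token chunk representation, and the mean-pooled question vector are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not taken from the paper).
hidden = 8           # encoder hidden size
doc_len, q_len = 10, 4
max_chunk_len = 3    # maximum answer-candidate length

# Stand-ins for the recurrent encoder outputs DCR would produce.
H_doc = rng.normal(size=(doc_len, hidden))  # document token representations
H_q = rng.normal(size=(q_len, hidden))      # question token representations

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Word-by-word attention: each document token attends over all question
# tokens, yielding a question-aware document representation.
scores = H_doc @ H_q.T                       # (doc_len, q_len)
alpha = softmax(scores, axis=1)              # attention weights per doc token
H_aware = alpha @ H_q                        # (doc_len, hidden)

# Candidate chunks: all spans up to max_chunk_len tokens; represent each by
# concatenating its first and last question-aware token vectors (one simple
# convention, chosen here for illustration).
chunks = [(i, j) for i in range(doc_len)
          for j in range(i, min(i + max_chunk_len, doc_len))]
chunk_reps = np.stack([np.concatenate([H_aware[i], H_aware[j]])
                       for i, j in chunks])  # (num_chunks, 2*hidden)

# Ranking: score every chunk against a pooled question vector and normalize;
# the top-ranked chunk is proposed as the answer span.
q_vec = np.concatenate([H_q.mean(axis=0)] * 2)  # crude pooling, illustrative
chunk_scores = softmax(chunk_reps @ q_vec)      # probability over candidates
best = chunks[int(np.argmax(chunk_scores))]
print(best)  # predicted (start, end) token indices of the answer chunk
```

In a trained model the encoder outputs, attention parameters, and scoring weights would all be learned end-to-end; here random vectors simply make the data flow and tensor shapes of each stage concrete.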

Similar Work