
Lite Unified Modeling For Discriminative Reading Comprehension

Zhao Yilin, Zhao Hai, Shen Libin, Zhao Yinggong. arXiv 2022

[Paper] [Code]    
Attention Mechanism · Has Code · Model Architecture · Reinforcement Learning

As a broad and major category in machine reading comprehension (MRC), discriminative MRC shares the general goal of predicting answers from given materials. However, the focuses of individual discriminative MRC tasks can differ considerably: multi-choice MRC requires the model to highlight and integrate all potentially critical evidence globally, while extractive MRC demands high local boundary precision for answer extraction. Previous work lacks a unified design tailored to the full range of discriminative MRC tasks. To fill this gap, we propose a lightweight POS-Enhanced Iterative Co-Attention Network (POI-Net) as a first attempt at such unified modeling, handling diverse discriminative MRC tasks simultaneously. While introducing almost no additional parameters, our lite unified design brings significant improvements to both the encoder and decoder components. Evaluation results on four discriminative MRC benchmarks consistently indicate the general effectiveness and applicability of our model. The code is available at https://github.com/Yilin1111/poi-net.
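To give a rough feel for the named technique, the sketch below shows one plausible form of a POS-enhanced iterative co-attention block in PyTorch: passage and question representations attend to each other over several rounds, with part-of-speech tag embeddings added to the token representations. This is a minimal illustration under assumed hyperparameters (hidden size, head count, iteration count) and an assumed structure; it is not the paper's exact POI-Net design, which is specified in the paper and the linked repository.

```python
import torch
import torch.nn as nn

class POSEnhancedCoAttention(nn.Module):
    """Illustrative iterative co-attention block (not the official POI-Net).

    Passage and question sequences attend to each other for a fixed number
    of rounds; POS-tag embeddings are added to both sequences beforehand.
    """

    def __init__(self, hidden_size: int, num_pos_tags: int, num_iters: int = 2):
        super().__init__()
        # POS embeddings share the hidden size so they can be added directly
        # to the token representations (an assumption for this sketch).
        self.pos_embed = nn.Embedding(num_pos_tags, hidden_size)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads=8, batch_first=True)
        self.norm = nn.LayerNorm(hidden_size)
        self.num_iters = num_iters

    def forward(self, passage, question, passage_pos, question_pos):
        # Inject POS information into both sequences.
        p = passage + self.pos_embed(passage_pos)
        q = question + self.pos_embed(question_pos)
        for _ in range(self.num_iters):
            # Passage attends to the question, then the question attends to
            # the updated passage; residual + LayerNorm keeps rounds stable.
            p = self.norm(p + self.attn(p, q, q)[0])
            q = self.norm(q + self.attn(q, p, p)[0])
        return p, q

# Usage with random tensors (batch=2, passage len=128, question len=16).
layer = POSEnhancedCoAttention(hidden_size=256, num_pos_tags=50)
p = torch.randn(2, 128, 256)
q = torch.randn(2, 16, 256)
p_pos = torch.randint(0, 50, (2, 128))
q_pos = torch.randint(0, 50, (2, 16))
p_out, q_out = layer(p, q, p_pos, q_pos)
```

Because the co-attention output feeds both global evidence aggregation (multi-choice) and span boundary scoring (extractive), a shared block like this is one way a single encoder design could serve both task families.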

Similar Work