QASE Enhanced PLMs: Improved Control In Text Generation For MRC

Lin Ai, Zheng Hui, Zizhou Liu, Julia Hirschberg. arXiv 2024

[Paper]    
Tags: Applications, Fine Tuning, GPT, Language Modeling, Model Architecture, Pretraining Methods, Training Techniques

To address the challenges of out-of-control generation in generative models for machine reading comprehension (MRC), we introduce the Question-Attended Span Extraction (QASE) module. Integrated during the fine-tuning of pre-trained generative language models (PLMs), QASE enables these PLMs to match SOTA extractive methods and outperform leading LLMs like GPT-4 in MRC tasks, without significant increases in computational costs.
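The abstract compresses the mechanism, so a rough sketch may help. Below is a minimal, hypothetical PyTorch rendering of the idea: an auxiliary attention head in which encoder tokens attend only to question positions produces answer start/end logits, and its span cross-entropy loss is added to the PLM's generation loss during fine-tuning. The model name (`google/flan-t5-base`), the mask conventions, the `span_weight` hyperparameter, and all class/parameter names are illustrative assumptions, not the paper's exact design.

```python
# A minimal sketch of the QASE idea, not the authors' released implementation:
# a span-extraction head whose tokens attend to the question, trained jointly
# with the PLM's generation loss during fine-tuning.
import torch
import torch.nn as nn
from transformers import AutoModelForSeq2SeqLM


class QuestionAttendedSpanHead(nn.Module):
    """Predicts answer start/end logits from question-attended encoder states."""

    def __init__(self, hidden_size: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.span_proj = nn.Linear(hidden_size, 2)  # start and end logits

    def forward(self, hidden_states, question_mask):
        # Every token queries the question tokens only: positions where
        # question_mask == 0 are excluded via key_padding_mask.
        attended, _ = self.attn(
            hidden_states, hidden_states, hidden_states,
            key_padding_mask=~question_mask.bool(),
        )
        logits = self.span_proj(attended)      # (batch, seq_len, 2)
        return logits[..., 0], logits[..., 1]  # start logits, end logits


class GenerativePLMWithQASE(nn.Module):
    """Generative PLM fine-tuned with an auxiliary span-extraction loss."""

    def __init__(self, model_name: str = "google/flan-t5-base",
                 span_weight: float = 1.0):
        super().__init__()
        self.plm = AutoModelForSeq2SeqLM.from_pretrained(model_name)
        self.span_head = QuestionAttendedSpanHead(self.plm.config.d_model)
        self.span_weight = span_weight  # assumed loss-balancing hyperparameter
        self.ce = nn.CrossEntropyLoss()

    def forward(self, input_ids, attention_mask, labels,
                question_mask, start_positions, end_positions):
        out = self.plm(input_ids=input_ids, attention_mask=attention_mask,
                       labels=labels)
        start_logits, end_logits = self.span_head(
            out.encoder_last_hidden_state, question_mask
        )
        span_loss = (self.ce(start_logits, start_positions)
                     + self.ce(end_logits, end_positions))
        # Joint objective: free-form generation plus grounded span supervision.
        return out.loss + self.span_weight * span_loss
```

Under this reading, the span head serves only as an auxiliary training signal and can be detached at inference, which would be one way to square the design with the abstract's claim of no significant increase in computational costs.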

Similar Work