On The Evaluation Of Answer-agnostic Paragraph-level Multi-question Generation

Jishnu Ray Chowdhury, Debanjan Mahata, Cornelia Caragea. arXiv 2022

[Paper]    
RAG Reinforcement Learning

We study the task of predicting a set of salient questions from a given paragraph without any prior knowledge of the precise answer. We make two main contributions. First, we propose a new method to evaluate a set of predicted questions against the set of references by using the Hungarian algorithm to assign predicted questions to references before scoring the assigned pairs. We show that our proposed evaluation strategy has better theoretical and practical properties compared to prior methods because it can properly account for the coverage of references. Second, we compare different strategies to utilize a pre-trained seq2seq model to generate and select a set of questions related to a given paragraph. The code is available.
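As a rough illustration of the evaluation idea described above, the sketch below matches predicted questions to reference questions with the Hungarian algorithm (via `scipy.optimize.linear_sum_assignment`) and scores only the matched pairs. The token-overlap F1 used as the pairwise similarity and the normalization by the number of references are illustrative assumptions, not the paper's exact metric; they are meant only to show how an assignment-based score can penalize poor coverage of the references.

```python
# Minimal sketch of assignment-based evaluation for multi-question generation:
# predicted questions are matched one-to-one to reference questions with the
# Hungarian algorithm, then only the matched pairs are scored. The token-F1
# similarity below is a stand-in for whatever pairwise metric is actually used.
import numpy as np
from scipy.optimize import linear_sum_assignment


def token_f1(pred: str, ref: str) -> float:
    """Simple token-overlap F1 between two questions (placeholder metric)."""
    p, r = pred.lower().split(), ref.lower().split()
    common = len(set(p) & set(r))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(r)
    return 2 * precision * recall / (precision + recall)


def assignment_score(predictions: list[str], references: list[str]) -> float:
    """Assign predictions to references to maximize total similarity, then
    average the matched-pair scores over all references. Unmatched references
    contribute zero, so missing salient questions lowers the score
    (an assumed normalization, used here to reflect reference coverage)."""
    sim = np.array([[token_f1(p, r) for r in references] for p in predictions])
    # linear_sum_assignment minimizes cost, so negate the similarity matrix.
    rows, cols = linear_sum_assignment(-sim)
    return sim[rows, cols].sum() / len(references)


if __name__ == "__main__":
    preds = ["What algorithm assigns predictions to references?",
             "Who proposed the evaluation method?"]
    refs = ["Who proposed the new evaluation strategy?",
            "What algorithm is used to assign predicted questions?",
            "Why does coverage of references matter?"]
    print(f"assignment-based score: {assignment_score(preds, refs):.3f}")
```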

Similar Work