QRelScore: Better Evaluating Generated Questions with Deeper Understanding of Context-aware Relevance

Wang Xiaoqiang, Liu Bang, Tang Siliang, Wu Lingfei. arXiv 2022

[Paper]    
BERT GPT Model Architecture Prompting Reinforcement Learning Security

Existing metrics for assessing question generation not only require costly human references but also fail to take the input context of generation into account, and thus lack a deep understanding of the relevance between generated questions and their input contexts. As a result, they may wrongly penalize a legitimate and reasonable candidate question when it (i) involves complicated reasoning over the context or (ii) is grounded by multiple pieces of evidence in the context. In this paper, we propose \(\textbf{QRelScore}\), a context-aware \(\underline{\textbf{Rel}}\)evance evaluation metric for \(\underline{\textbf{Q}}\)uestion Generation. Built on off-the-shelf language models such as BERT and GPT-2, QRelScore employs both word-level hierarchical matching and sentence-level prompt-based generation to handle complicated reasoning and diverse generation from multiple pieces of evidence, respectively. Compared with existing metrics, our experiments demonstrate that QRelScore achieves a higher correlation with human judgments while being much more robust to adversarial samples.
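To make the two components concrete, here is a minimal sketch of how a reference-free, context-aware relevance score along these lines could be assembled: a BERTScore-style greedy word-level matching between the generated question and the context, plus a GPT-2 log-likelihood of the question when the context is used as a prompt. The prompt template, the function names, and the simple averaging used to combine the two scores are illustrative assumptions for this sketch, not the paper's actual formulation of QRelScore.

```python
# Hypothetical sketch of a QRelScore-style metric (not the authors' code).
# Word-level: BERT embeddings + greedy token matching (BERTScore-style).
# Sentence-level: GPT-2 log-likelihood of the question given the context.
import torch
from transformers import AutoTokenizer, AutoModel, GPT2LMHeadModel, GPT2Tokenizer

bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()
gpt2_tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def word_level_score(question: str, context: str) -> float:
    """Greedy cosine matching of question tokens against context tokens."""
    def embed(text):
        enc = bert_tok(text, return_tensors="pt", truncation=True)
        hidden = bert(**enc).last_hidden_state[0]      # (seq_len, dim)
        return torch.nn.functional.normalize(hidden, dim=-1)
    q, c = embed(question), embed(context)
    sim = q @ c.T                                      # pairwise cosine similarities
    # Each question token is matched to its best context token, then averaged.
    return sim.max(dim=1).values.mean().item()

@torch.no_grad()
def sentence_level_score(question: str, context: str) -> float:
    """Mean log-probability of the question when GPT-2 is prompted with the context."""
    prompt = context + " Question: "                   # assumed prompt template
    prompt_ids = gpt2_tok(prompt, return_tensors="pt").input_ids
    q_ids = gpt2_tok(question, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, q_ids], dim=1)
    logits = gpt2(ids).logits[0, :-1]                  # prediction for each next token
    logprobs = torch.log_softmax(logits, dim=-1)
    targets = ids[0, 1:]
    token_lp = logprobs[torch.arange(logprobs.size(0)), targets]
    # Keep only the log-probs of the question tokens (predicted after the prompt).
    return token_lp[prompt_ids.size(1) - 1:].mean().item()

def qrel_score(question: str, context: str) -> float:
    # Naive average for illustration; the two components live on different
    # scales, and the paper's actual combination may differ.
    return 0.5 * word_level_score(question, context) + \
           0.5 * sentence_level_score(question, context)

print(qrel_score("Who proposed the theory of relativity?",
                 "Albert Einstein proposed the theory of relativity in 1905."))
```

Note that neither component consults a human-written reference question: the word-level score matches the candidate directly against the context, and the sentence-level score asks how naturally the candidate follows from the context as a prompt, which is what lets a metric of this shape stay reference-free.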

Similar Work