
Evaluation Of Instruction-following Ability For Large Language Models On Story-ending Generation

Hida Rem, Ohmura Junki, Sekiya Toshiyuki. arXiv 2024

[Paper]
Tags: GPT, Model Architecture, Uncategorized

Instruction-tuned Large Language Models (LLMs) have achieved remarkable performance across various benchmark tasks. While providing instructions to LLMs to guide their generation is user-friendly, it remains unclear how to assess their instruction-following capabilities due to a lack of evaluation metrics. In this paper, we focus on evaluating the instruction-following ability of LLMs in the context of story-ending generation, which requires diverse and context-specific instructions. We propose an automatic evaluation pipeline that utilizes a machine reading comprehension (MRC) model to determine whether the generated story ending reflects the instruction. Our findings demonstrate that our proposed metric aligns with human evaluation. Furthermore, our experiments confirm that recent open-source LLMs can achieve instruction-following performance close to GPT-3.5, as assessed through automatic evaluation.
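
The abstract describes checking instruction reflection with an MRC model but does not give implementation details, so the following is a minimal sketch of one plausible setup: an off-the-shelf extractive question-answering model (here `deepset/roberta-base-squad2`, an assumed stand-in, not the authors' model) is queried with the instruction against the generated ending, and the answer confidence is thresholded to decide whether the ending reflects the instruction. The question construction and threshold value are illustrative assumptions.

```python
# Hedged sketch of an MRC-based instruction-reflection check.
# Model choice, question template, and threshold are assumptions,
# not the pipeline released with the paper.
from transformers import pipeline

# Extractive QA model standing in for the MRC component.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")


def ending_reflects_instruction(instruction: str, story_ending: str,
                                threshold: float = 0.5) -> bool:
    """Return True if the MRC model finds supporting evidence for the
    instruction in the generated ending with confidence above `threshold`."""
    # Simplification: use the instruction itself as the query; the paper
    # may construct questions differently.
    result = qa(question=instruction, context=story_ending)
    return result["score"] >= threshold


# Toy usage: an instruction and a candidate generated ending.
print(ending_reflects_instruction(
    instruction="What do the two friends do to reconcile?",
    story_ending="After a long silence, they hugged and forgave each other.",
))
```

In this sketch, a low answer score (the SQuAD2-style model can abstain) is treated as the ending failing to reflect the instruction; aggregating this boolean over a test set would yield an instruction-following rate comparable across models.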

Similar Work