A Course Shared Task On Evaluating LLM Output For Clinical Questions

Hou Yufang, Tran Thy Thy, Vu Doan Nam Long, Cao Yiwen, Li Kai, Rohde Lukas, Gurevych Iryna. arXiv 2024

[Paper]

This paper presents a shared task that we organized in the Foundations of Language Technology (FoLT) course in 2023/2024 at the Technical University of Darmstadt, which focuses on evaluating whether the output of Large Language Models (LLMs) contains harmful answers to health-related clinical questions. We describe the task design considerations and report the feedback we received from the students. We expect the task and the findings reported in this paper to be relevant for instructors teaching natural language processing (NLP) and designing course assignments.
