
RAmBLA: A Framework For Evaluating The Reliability Of LLMs As Assistants In The Biomedical Domain

Bolton William James, Poyiadzi Rafael, Morrell Edward R., Bueno Gabriela Van Bergen Gonzalez, Goetz Lea. arXiv 2024

[Paper]    
Tags: Applications, Prompting, Reinforcement Learning, Security, Tools

Large Language Models (LLMs) increasingly support applications in a wide range of domains, some with potentially high societal impact such as biomedicine, yet their reliability in realistic use cases is under-researched. In this work we introduce the Reliability AssessMent for Biomedical LLM Assistants (RAmBLA) framework and evaluate whether four state-of-the-art foundation LLMs can serve as reliable assistants in the biomedical domain. We identify prompt robustness, high recall, and a lack of hallucinations as necessary criteria for this use case. We design shortform tasks and tasks requiring LLM freeform responses that mimic real-world user interactions. We evaluate LLM performance using semantic similarity with a ground-truth response, as judged by an evaluator LLM.
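The last point, grading freeform answers by semantic similarity through an evaluator LLM, is straightforward to prototype. The sketch below is an illustrative LLM-as-judge setup, not the paper's implementation: the prompt wording, the 1-5 scale, and the caller-supplied `call_llm` function are all assumptions, since the abstract does not specify the judge prompt or scoring interface.

```python
"""Minimal sketch of LLM-as-judge semantic-similarity grading.

Assumptions (not from the paper): the prompt template, the 1-5 scale,
and the injected `call_llm` callable are illustrative placeholders.
"""

from typing import Callable

JUDGE_PROMPT = """You are grading a biomedical assistant's answer.
Ground-truth answer:
{ground_truth}

Model answer:
{response}

On a scale of 1 (completely different) to 5 (semantically equivalent),
how close is the model answer to the ground truth? Reply with the number only."""


def judge_semantic_similarity(
    response: str,
    ground_truth: str,
    call_llm: Callable[[str], str],
) -> int:
    """Ask an evaluator LLM to score how close a freeform response is to the ground truth."""
    prompt = JUDGE_PROMPT.format(ground_truth=ground_truth, response=response)
    raw = call_llm(prompt).strip()
    # Keep only the leading integer; fall back to the lowest score if parsing fails.
    try:
        return max(1, min(5, int(raw.split()[0])))
    except (ValueError, IndexError):
        return 1


if __name__ == "__main__":
    # Stub evaluator standing in for a real LLM call, so the sketch runs as-is.
    dummy_judge = lambda prompt: "4"
    score = judge_semantic_similarity(
        response="Aspirin irreversibly inhibits COX-1 and COX-2.",
        ground_truth="Aspirin works by irreversible inhibition of cyclooxygenase enzymes.",
        call_llm=dummy_judge,
    )
    print(f"Judge score: {score}/5")
```

Passing the judge in as a callable keeps the sketch runnable with a stub while leaving the choice of evaluator model and API to the reader.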

Similar Work