A Study On Large Language Models' Limitations In Multiple-choice Question Answering

Aisha Khatun, Daniel G. Brown. arXiv 2024

[Paper]    
Applications Reinforcement Learning

Large Language Models (LLMs) have been widely adopted, particularly since the emergence of open-source models. More importantly, smaller models are well suited for integration into consumer devices and are frequently used either as standalone solutions or as subroutines in various AI tasks. Despite their ubiquitous use, there is no systematic analysis of their specific capabilities and limitations. In this study, we tackle one of the most widely used tasks: answering Multiple-Choice Questions (MCQs). We analyze 26 small open-source models and find that 65% of the models do not understand the task, only 4 models properly select an answer from the given choices, and only 5 of these models are choice-order independent. These results are alarming given the extensive use of MCQ tests with these models. We recommend exercising caution and testing task understanding before using MCQs to evaluate LLMs in any field whatsoever.
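The choice-order independence the abstract tests for can be sketched as follows: present the same question under every permutation of its options and check whether the model keeps selecting the same answer *content* (not the same letter). This is a minimal illustration, not the paper's actual evaluation harness; `ask_model` is a hypothetical stand-in for a real LLM call that returns the index of the chosen option.

```python
import itertools

def format_mcq(question, choices):
    """Render a question with lettered choices (A, B, C, ...)."""
    letters = "ABCDEFGH"
    lines = [question]
    lines += [f"{letters[i]}. {c}" for i, c in enumerate(choices)]
    return "\n".join(lines)

def is_order_independent(ask_model, question, choices):
    """Return True if the model picks the same answer content
    under every permutation of the choice order.

    `ask_model(prompt, choices)` is a hypothetical callable returning
    the index of the chosen option; swap in a real LLM query here.
    """
    picked = set()
    for perm in itertools.permutations(choices):
        prompt = format_mcq(question, list(perm))
        idx = ask_model(prompt, list(perm))
        picked.add(perm[idx])  # record the chosen *content*, not the letter
    return len(picked) == 1

# Two toy "models" to exercise the check:
# one answers by content (order independent), one always picks option A.
def content_model(prompt, choices):
    return choices.index(min(choices))  # always picks the same content

def first_option_model(prompt, choices):
    return 0  # always answers "A", whatever it contains
```

A model that fails this check (like `first_option_model`) can score very differently on the same benchmark depending only on how the choices happen to be ordered, which is the failure mode the paper warns about.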

Similar Work