
MultiPragEval: Multilingual Pragmatic Evaluation of Large Language Models

Park Dojun, Lee Jiwoo, Park Seohyun, Jeong Hyeyun, Koo Youngeun, Hwang Soonha, Park Seonwoo, Lee Sungeun. arXiv 2024

[Paper]    

As the capabilities of LLMs expand, it becomes increasingly important to evaluate them beyond basic knowledge assessment, focusing on higher-level language understanding. This study introduces MultiPragEval, a robust test suite designed for the multilingual pragmatic evaluation of LLMs across English, German, Korean, and Chinese. Comprising 1200 question units categorized according to Grice’s Cooperative Principle and its four conversational maxims, MultiPragEval enables an in-depth assessment of LLMs’ contextual awareness and their ability to infer implied meanings. Our findings demonstrate that Claude3-Opus significantly outperforms other models in all tested languages, establishing the state of the art in the field. Among open-source models, Solar-10.7B and Qwen1.5-14B emerge as strong competitors. This study not only leads the way in the multilingual evaluation of LLMs’ pragmatic inference but also provides valuable insights into the nuanced capabilities necessary for advanced language comprehension in AI systems.
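To make the described setup concrete, the sketch below shows one hypothetical way to represent maxim-tagged, language-tagged question units and to score a model per language/maxim pair. The field names, prompt layout, and grading scheme are illustrative assumptions for this page, not the authors' released data format or evaluation code.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical representation of one MultiPragEval-style question unit.
# Field names and the multiple-choice grading below are assumptions,
# not the paper's actual data schema.
@dataclass
class QuestionUnit:
    language: str        # "en", "de", "ko", or "zh"
    maxim: str           # Gricean maxim: "quantity", "quality", "relation", "manner"
    dialogue: str        # short exchange containing an implicature
    question: str        # asks what the speaker implied
    choices: List[str]   # candidate interpretations
    answer_idx: int      # index of the pragmatically correct choice

def evaluate(units: List[QuestionUnit],
             model: Callable[[str], int]) -> Dict[str, float]:
    """Compute accuracy per language/maxim pair; `model` maps a prompt to a choice index."""
    buckets: Dict[str, List[int]] = {}
    for u in units:
        prompt = f"{u.dialogue}\n{u.question}\n" + "\n".join(
            f"{i}. {c}" for i, c in enumerate(u.choices))
        correct = int(model(prompt) == u.answer_idx)
        buckets.setdefault(f"{u.language}/{u.maxim}", []).append(correct)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

if __name__ == "__main__":
    demo = [QuestionUnit(
        language="en", maxim="quantity",
        dialogue='A: "Did you finish the report?"  B: "I finished the introduction."',
        question="What does B most plausibly imply?",
        choices=["The whole report is done.", "The report is not yet finished."],
        answer_idx=1)]
    # Trivial stand-in model that always picks option 1, just to exercise the loop.
    print(evaluate(demo, lambda prompt: 1))
```

Grouping scores by language/maxim pair mirrors the kind of breakdown a suite organized around the four conversational maxims and four languages would need in order to compare models such as Claude3-Opus, Solar-10.7B, and Qwen1.5-14B.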

Similar Work