Evaluating LLMs' Mathematical and Coding Competency through Ontology-Guided Interventions

Hong Pengfei, Majumder Navonil, Ghosal Deepanway, Aditya Somak, Mihalcea Rada, Poria Soujanya. arXiv 2024

Recent advancements in Large Language Models (LLMs) have showcased striking results on existing logical reasoning benchmarks, with some models even surpassing human performance. However, the true depth of their competencies and robustness on reasoning tasks remains an open question. To this end, this paper focuses on two popular reasoning tasks: arithmetic reasoning and code generation. In particular, it introduces: (i) a general ontology of perturbations for math and coding questions, (ii) a semi-automatic method to apply these perturbations, and (iii) two datasets, MORE and CORE, of perturbed math and coding problems, respectively, to probe the limits of LLM capabilities in numeric reasoning and coding tasks. Through comprehensive evaluations of both closed-source and open-source LLMs, the authors show a significant performance drop across all models on the perturbed questions, suggesting that current LLMs lack robust problem-solving skills and structured reasoning abilities in many areas defined by the ontology. The datasets and source code are open-sourced at: https://github.com/declare-lab/llm_robustness.
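To make the idea of ontology-guided perturbations concrete, the sketch below shows one plausible perturbation family: resampling the numeric values of a word problem while recomputing the ground-truth answer, so a model cannot rely on having memorized the original instance. This is a minimal illustration only; the function `perturb_numeric`, the template, and the value ranges are hypothetical and are not taken from the paper's actual pipeline or the linked repository.

```python
# Hypothetical sketch of a numeric-substitution perturbation (not the authors' code).
# A question template with numeric slots is refilled with fresh values and the
# ground-truth answer is recomputed, yielding a perturbed (question, answer) pair.
import random

def perturb_numeric(template: str, answer_fn, slots: dict) -> tuple:
    """Fill the template's numeric slots with fresh random values and
    return the perturbed question together with its recomputed answer."""
    values = {name: random.choice(list(rng)) for name, rng in slots.items()}
    question = template.format(**values)
    return question, answer_fn(**values)

# Example: a GSM8K-style problem with two numeric slots.
template = "Alice has {a} apples and buys {b} more. How many apples does she have now?"
question, answer = perturb_numeric(
    template,
    answer_fn=lambda a, b: a + b,
    slots={"a": range(2, 50), "b": range(2, 50)},
)
print(question, "->", answer)
```

Other perturbation types in an ontology like this could, for instance, rename entities, reorder premises, or insert distracting but irrelevant facts; the same template-plus-answer-function pattern would apply, with the answer function updated accordingly.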

Similar Work