
Evaluating Llms' Mathematical Reasoning In Financial Document Question Answering

Srivastava Pragya, Malik Manuj, Gupta Vivek, Ganu Tanuja, Roth Dan. arXiv 2024


Large Language Models (LLMs) excel at natural language understanding, but their capability for complex mathematical reasoning over a combination of structured tables and unstructured text is uncertain. This study explores LLMs' mathematical reasoning on four financial tabular question-answering datasets: TATQA, FinQA, ConvFinQA, and MultiHiertt. Through extensive experiments with various models and prompting techniques, we assess how LLMs adapt to complex tables and mathematical tasks, focusing on sensitivity to table complexity and on how performance varies as the number of arithmetic reasoning steps grows. The results provide insight into LLMs' capabilities and limitations in handling complex mathematical scenarios over semi-structured tables. Finally, we introduce a novel prompting technique tailored to semi-structured documents that matches or outperforms other baselines while providing a nuanced understanding of LLMs' abilities for this task.
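To make the task concrete, the sketch below shows roughly how a TATQA/FinQA-style example can be posed to an LLM: the table is serialized to text, the accompanying passage is appended, and the model is asked to show its arithmetic step by step before giving a final number. The table values, the question, and the `call_llm` stub are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of hybrid table+text financial QA prompting.
# The table, passage, question, and call_llm stub are illustrative
# assumptions; they are not drawn from the paper itself.

def serialize_table(header, rows):
    """Flatten a table into a pipe-delimited text block for the prompt."""
    lines = [" | ".join(header)]
    lines += [" | ".join(str(cell) for cell in row) for row in rows]
    return "\n".join(lines)

header = ["Item", "2021", "2022"]
rows = [
    ["Revenue", "1,200", "1,500"],
    ["Operating expenses", "800", "950"],
]
passage = "All figures are in millions of dollars."
question = "By how much did operating income grow from 2021 to 2022?"

prompt = (
    "Answer the question using the table and text.\n"
    "Show your arithmetic step by step, then give the final number.\n\n"
    f"Table:\n{serialize_table(header, rows)}\n\n"
    f"Text: {passage}\n\n"
    f"Question: {question}\nReasoning:"
)

def call_llm(prompt: str) -> str:
    # Placeholder for any chat-completion client; swap in a real API call.
    raise NotImplementedError

print(prompt)  # inspect the serialized prompt before sending it to a model
```

Answering this example requires two arithmetic steps (operating income per year, then the difference), which is exactly the kind of multi-step reasoning whose effect on accuracy the study measures.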
