Assessing The Impact Of Prompting Methods On Chatgpt's Mathematical Capabilities

Chen Yuhao, Wong Chloe, Yang Hanwen, Aguenza Juan, Bhujangari Sai, Vu Benthan, Lei Xun, Prasad Amisha, Fluss Manny, Phuong Eric, Liu Minghao, Kumar Raja, Vats Vanshika, Davis James. arXiv 2023

[Paper]
Tags: GPT, Model Architecture, Prompting

This study critically evaluates the efficacy of prompting methods in enhancing the mathematical reasoning capability of large language models (LLMs). The investigation uses three prescriptive prompting methods (simple, persona, and conversational prompting), known to be effective at enhancing LLMs' performance on linguistic tasks. We conduct this analysis on OpenAI's LLM chatbot, ChatGPT-3.5, using extensive problem sets from the MATH, GSM8K, and MMLU datasets, which encompass a broad spectrum of mathematical challenges. A grading script adapted to each dataset is used to measure how effectively these prompting interventions improve the model's mathematical problem-solving ability. Contrary to expectations, our empirical analysis reveals that none of the investigated methods consistently improves over ChatGPT-3.5's baseline performance, and some cause significant degradation. Our findings suggest that prompting strategies do not necessarily generalize to new domains; in this study they failed to enhance mathematical performance.
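
The abstract does not reproduce the paper's prompt templates. As a rough illustration, here is a minimal sketch of how the three prompting styles might be issued against the OpenAI chat API; the exact prompt wordings, the example problem, and the multi-turn framing are hypothetical, not taken from the paper.

```python
# Sketch of the three prompting styles the paper evaluates. The prompt
# wordings below are illustrative assumptions; the paper's templates differ.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROBLEM = "If 3x + 5 = 20, what is x?"  # hypothetical example problem

# Simple prompting: the bare problem statement.
simple = [{"role": "user", "content": PROBLEM}]

# Persona prompting: a system message assigns an expert persona first.
persona = [
    {"role": "system", "content": "You are an expert mathematician."},
    {"role": "user", "content": PROBLEM},
]

# Conversational prompting: the problem is framed inside a multi-turn exchange.
conversational = [
    {"role": "user", "content": "Hi! Can you help me with a math problem?"},
    {"role": "assistant", "content": "Of course, happy to help."},
    {"role": "user", "content": PROBLEM},
]

for name, messages in [("simple", simple), ("persona", persona),
                       ("conversational", conversational)]:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages, temperature=0
    )
    print(name, "->", response.choices[0].message.content)
```

The paper's per-dataset grading scripts are likewise not shown; a simplified sketch for GSM8K-style items (whose gold answers end in "#### <number>") might compare the last number the model produces against the gold answer:

```python
import re

def grade_gsm8k(model_output: str, gold_answer: str) -> bool:
    """Simplified GSM8K-style grader: match the model's final number
    against the number after '####' in the gold answer. An assumption
    about the paper's scripts, which are adapted to each dataset."""
    gold = gold_answer.split("####")[-1].strip().replace(",", "")
    nums = re.findall(r"-?\d+(?:\.\d+)?", model_output.replace(",", ""))
    return bool(nums) and nums[-1] == gold
```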
