The Benefits Of A Concise Chain Of Thought On Problem-solving In Large Language Models

Renze Matthew, Guven Erhan. arXiv 2024

[Paper]    
GPT · Model Architecture · Prompting · RAG

In this paper, we introduce Concise Chain-of-Thought (CCoT) prompting. We compared standard CoT and CCoT prompts to measure how conciseness affects response length and correct-answer accuracy. We evaluated this using GPT-3.5 and GPT-4 on a multiple-choice question-and-answer (MCQA) benchmark. CCoT reduced average response length by 48.70% for both GPT-3.5 and GPT-4 while having a negligible impact on problem-solving performance. However, on math problems, GPT-3.5 with CCoT incurred a performance penalty of 27.69%. Overall, CCoT led to an average per-token cost reduction of 22.67%.
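The contrast between the two prompting styles can be sketched in a few lines of Python. The instruction wording below is an illustrative assumption, not the paper's verbatim prompts: the key idea is that a CCoT prompt asks the model to reason step by step while explicitly requesting brevity.

```python
# Minimal sketch of standard CoT vs. Concise CoT (CCoT) prompt construction.
# The exact instruction phrasing is assumed for illustration; the paper's
# actual prompt templates may differ.

def build_mcqa_prompt(question: str, concise: bool) -> str:
    """Build a multiple-choice QA prompt with CoT or CCoT instructions."""
    if concise:
        # CCoT: step-by-step reasoning, but with an explicit brevity constraint.
        instruction = "Think step by step, but keep each step as brief as possible."
    else:
        # Standard CoT: step-by-step reasoning with no length constraint.
        instruction = "Think step by step and explain your reasoning in detail."
    return f"{instruction}\n\nQuestion: {question}\nAnswer:"

question = (
    "A train travels 60 miles in 1.5 hours. What is its average speed?\n"
    "(A) 30 mph  (B) 40 mph  (C) 45 mph"
)
print(build_mcqa_prompt(question, concise=True))
```

Either prompt would then be sent to the model (e.g., GPT-3.5 or GPT-4) unchanged; the paper's comparison varies only this instruction while holding the question set fixed.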

Similar Work