Take A Step Back: Evoking Reasoning Via Abstraction In Large Language Models

Zheng Huaixiu Steven, Mishra Swaroop, Chen Xinyun, Cheng Heng-tze, Chi Ed H., Le Quoc V., Zhou Denny. arXiv 2023

[Paper]

Tags: GPT, Model Architecture, Prompting

We present Step-Back Prompting, a simple prompting technique that enables LLMs to perform abstraction, deriving high-level concepts and first principles from instances containing specific details. Guided by these concepts and principles, LLMs become significantly better at following a correct reasoning path toward the solution. We conduct experiments with Step-Back Prompting on PaLM-2L, GPT-4, and Llama2-70B, and observe substantial performance gains on a range of challenging reasoning-intensive tasks, including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back Prompting improves PaLM-2L performance on MMLU Physics and Chemistry by 7% and 11% respectively, on TimeQA by 27%, and on MuSiQue by 7%.
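The two-stage structure described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' exact prompt templates: the function names, prompt wording, and the `toy_llm` stand-in are all assumptions made for the example; in practice `llm` would be a call to a real model API.

```python
# Sketch of Step-Back Prompting: (1) ask the model for a more abstract
# "step-back" question, (2) answer that question to surface the underlying
# principle, (3) answer the original question guided by the principle.
# Prompt wording and helper names here are illustrative, not the paper's.

def step_back_question(llm, question):
    """Stage 1: abstract the specific question into a higher-level one."""
    prompt = (
        f"Here is a question: {question}\n"
        "Take a step back and ask a more generic question about the "
        "underlying principle or concept needed to answer it."
    )
    return llm(prompt)

def step_back_answer(llm, question):
    """Stage 2: retrieve the principle, then answer the original question."""
    abstract_q = step_back_question(llm, question)
    principle = llm(abstract_q)
    prompt = (
        f"Principles: {principle}\n"
        f"Using the principles above, answer: {question}"
    )
    return llm(prompt)

# Toy stand-in for a real model so the sketch runs end to end.
def toy_llm(prompt):
    if "Take a step back" in prompt:
        return "What physics principles govern ideal gases?"
    if "physics principles" in prompt:
        return "Ideal gas law: PV = nRT."
    return "Apply PV = nRT: at constant volume, pressure doubles."

answer = step_back_answer(
    toy_llm,
    "What happens to the pressure if temperature doubles at constant volume?",
)
print(answer)
```

The key design point is that the original question is answered only after the model has stated the governing principle, so the final prompt conditions the reasoning on the abstraction rather than on surface details.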

Similar Work