Towards Better Chain-of-thought Prompting Strategies: A Survey

Yu Zihan, He Liang, Wu Zhen, Dai Xinyu, Chen Jiajun. arXiv, 2023

[Paper]    
Tags: Applications, Merging, Prompting, Survey Paper, Uncategorized

Chain-of-Thought (CoT), a step-wise and coherent reasoning chain, shows impressive strength when used as a prompting strategy for large language models (LLMs). In recent years, the prominent effect of CoT prompting has attracted a growing body of research. However, there is still no systematic summary of the key factors behind CoT prompting, nor a comprehensive guide to utilizing such prompts. For a deeper understanding of CoT prompting, we survey a wide range of current research, present a systematic and comprehensive analysis of the factors that may influence the effect of CoT prompting, and, building on these discussions, introduce how to better apply it in different applications. We further analyze the challenges and propose some future directions for CoT prompting. This survey can serve as an overall reference for related research.
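
For concreteness, a minimal sketch of what the two most common CoT prompting styles look like in practice. The few-shot exemplar is the well-known tennis-ball problem from Wei et al. (2022), and the "Let's think step by step" trigger is from Kojima et al. (2022); the helper names below are illustrative, not taken from this survey:

```python
# Minimal sketch of few-shot and zero-shot Chain-of-Thought (CoT) prompting.
# Any LLM completion API could consume the returned prompt strings.

FEW_SHOT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def few_shot_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar whose answer spells out intermediate steps,
    so the model imitates the step-wise reasoning chain."""
    return f"{FEW_SHOT_EXEMPLAR}\nQ: {question}\nA:"

def zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot variant: a trigger phrase elicits the reasoning chain
    without any hand-written exemplars."""
    return f"Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    q = ("A farmer has 3 fields, each with 12 rows of 8 plants. "
         "How many plants are there in total?")
    print(few_shot_cot_prompt(q))
    print(zero_shot_cot_prompt(q))
```

Much of the research this survey covers varies exactly these ingredients: which exemplars to include, how the reasoning steps are written, and which trigger phrases to use.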
