
Efficient Prompting Methods For Large Language Models: A Survey

Chang Kaiyan, Xu Songcheng, Wang Chenglong, Luo Yingfeng, Xiao Tong, Zhu Jingbo. arXiv 2024

[Paper]    
Efficiency And Optimization · In Context Learning · Prompting · Reinforcement Learning · Survey Paper

Prompting has become a mainstream paradigm for adapting large language models (LLMs) to specific natural language processing tasks. While this approach opens the door to in-context learning with LLMs, it brings the additional computational burden of model inference and the human effort of manually designing prompts, particularly when lengthy and complex prompts are used to guide and control the behavior of LLMs. As a result, the LLM field has seen a remarkable surge in efficient prompting methods. In this paper, we present a comprehensive overview of these methods. At a high level, efficient prompting methods can broadly be categorized into two approaches: prompting with efficient computation and prompting with efficient design. The former covers various ways of compressing prompts, and the latter employs techniques for automatic prompt optimization. We present the basic concepts of prompting, review advances in efficient prompting, and highlight future research directions.
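To make the "prompting with efficient computation" category concrete, here is a minimal illustrative sketch of hard prompt compression, not taken from the survey: lines of a long prompt are scored by a stand-in heuristic (fraction of unique words per line) and greedily retained under a token budget. The tokenizer, the scoring function, and `compress_prompt` itself are all hypothetical simplifications; methods covered by surveys like this one typically use model-based importance signals (e.g., perplexity) instead.

```python
# Illustrative sketch of naive hard prompt compression (hypothetical
# helper names; a real method would use a model-based importance score).

def count_tokens(text: str) -> int:
    # Whitespace split as a crude stand-in for a real tokenizer.
    return len(text.split())

def info_score(line: str) -> float:
    # Stand-in heuristic: lines with more unique words per token are
    # treated as more informative; repetitive lines score lower.
    words = line.split()
    return len(set(words)) / len(words) if words else 0.0

def compress_prompt(lines: list[str], budget: int) -> str:
    # Greedily keep the highest-scoring lines that fit the token
    # budget, then emit the kept lines in their original order.
    order = sorted(range(len(lines)),
                   key=lambda i: info_score(lines[i]), reverse=True)
    kept, used = [], 0
    for i in order:
        cost = count_tokens(lines[i])
        if used + cost <= budget:
            kept.append(i)
            used += cost
    return "\n".join(lines[i] for i in sorted(kept))

prompt = [
    "You are a helpful assistant.",
    "Always answer politely and always answer politely.",
    "Summarize the user's document in three bullet points.",
]
short = compress_prompt(prompt, budget=13)
```

Under this budget the repetitive middle line is dropped while the instruction lines survive, which is the basic trade-off prompt compression methods try to make well: shorter inputs, minimal loss of task-relevant content.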

Similar Work