
Revisiting Automated Prompting: Are We Actually Doing Better?

Zhou Yulin, Zhao Yiren, Shumailov Ilia, Mullins Robert, Gal Yarin. arXiv 2023

[Paper]    
Few Shot, Fine Tuning, Pretraining Methods, Prompting, Training Techniques

Current literature demonstrates that Large Language Models (LLMs) are strong few-shot learners, and that prompting significantly increases their performance on a range of downstream tasks in a few-shot learning setting. Attempts to automate human-led prompting followed, with some progress: in particular, subsequent work demonstrates that automation can outperform fine-tuning in certain K-shot learning scenarios. In this paper, we revisit techniques for automated prompting on six downstream tasks and a wider range of K-shot learning settings. We find that automated prompting does not consistently outperform simple manual prompts. Our work suggests that, in addition to fine-tuning, manual prompts should be used as a baseline in this line of research.
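To make the comparison concrete, a minimal sketch (not code from the paper) of the kind of manual-prompt baseline the abstract argues for: a fixed, human-written template filled with K labeled demonstrations, which automated prompt-search methods would be measured against. The task, template, and example demonstrations below are hypothetical placeholders.

```python
# Minimal sketch of a manual K-shot prompt baseline (hypothetical task
# and template; this is an illustration, not the paper's implementation).
from typing import List, Tuple


def build_manual_prompt(demos: List[Tuple[str, str]], query: str) -> str:
    """Format K labeled demonstrations plus a query into one few-shot prompt."""
    template = "Review: {text}\nSentiment: {label}\n\n"
    shots = "".join(template.format(text=t, label=l) for t, l in demos)
    return shots + f"Review: {query}\nSentiment:"


# K = 2 shot example for a sentiment-classification task.
demos = [
    ("A moving, beautifully acted film.", "positive"),
    ("Dull plot and wooden dialogue.", "negative"),
]
print(build_manual_prompt(demos, "Surprisingly funny and heartfelt."))
```

The resulting string would be passed to an LLM as-is; the paper's point is that a baseline this simple should be reported alongside automated prompt search and fine-tuning.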

Similar Work