Revisiting Prompt Engineering Via Declarative Crowdsourcing

Aditya G. Parameswaran, Shreya Shankar, Parth Asawa, Naman Jain, Yujie Wang. arXiv 2023

[Paper]    
Prompting RAG

Large language models (LLMs) are incredibly powerful at comprehending and generating data in the form of text, but are brittle and error-prone. A wave of toolkits and recipes has emerged around so-called prompt engineering: the process of asking an LLM to do something via a series of prompts. For LLM-powered data processing workflows in particular, however, optimizing for quality while keeping cost bounded is a tedious, manual process. We put forth a vision for declarative prompt engineering. We view LLMs as crowd workers and leverage ideas from the declarative crowdsourcing literature (including leveraging multiple prompting strategies, ensuring internal consistency, and exploring hybrid LLM/non-LLM approaches) to make prompt engineering a more principled process. Preliminary case studies on sorting, entity resolution, and imputation demonstrate the promise of our approach.
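As an illustrative sketch only (not the authors' implementation), the snippet below applies two of the crowdsourcing ideas named in the abstract to one of its case studies, entity resolution: it asks the same question with several prompt phrasings (multiple prompting strategies) and takes a majority vote across the answers (internal consistency). The `llm` callable, the prompt wordings, and the function name are hypothetical placeholders.

```python
from collections import Counter
from typing import Callable

def entity_match_votes(
    llm: Callable[[str], str],  # caller-supplied completion function (hypothetical)
    record_a: str,
    record_b: str,
) -> bool:
    """Ask the same entity-resolution question via several prompt
    phrasings and take a majority vote, in the spirit of issuing
    redundant tasks to multiple crowd workers."""
    # Three prompting strategies for the same underlying question.
    prompts = [
        f"Do these two records refer to the same entity? Answer yes or no.\n"
        f"A: {record_a}\nB: {record_b}",
        f"Record 1: {record_a}\nRecord 2: {record_b}\n"
        f"Are these duplicates? Reply with exactly 'yes' or 'no'.",
        f"You are a data-cleaning assistant. Decide whether the following "
        f"rows describe the same real-world entity (yes/no).\n"
        f"{record_a}\n{record_b}",
    ]
    answers = []
    for prompt in prompts:
        reply = llm(prompt).strip().lower()
        answers.append("yes" if reply.startswith("yes") else "no")
    # Internal consistency: accept the majority answer across strategies.
    winner, _count = Counter(answers).most_common(1)[0]
    return winner == "yes"

# Example with a stub LLM; swap in a real completion call.
fake_llm = lambda prompt: "yes"
print(entity_match_votes(fake_llm, "Jane Doe, NYC", "J. Doe, New York"))  # True
```

A real pipeline would also bound cost, for example by running a cheap non-LLM pre-filter such as string similarity before invoking the LLM at all, echoing the hybrid LLM/non-LLM approaches the abstract mentions.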

Similar Work