GRAD-SUM: Leveraging Gradient Summarization For Optimal Prompt Engineering

Derek Austin, Elliott Chartock. arXiv 2024

[Paper]    
Applications · Efficiency And Optimization · Prompting · RAG

Prompt engineering for large language models (LLMs) is often a manual, time-intensive process that involves iteratively generating, evaluating, and refining prompts to ensure high-quality outputs. While there has been work on automating prompt engineering, existing solutions are generally either tuned to specific tasks with given answers or quite costly. We introduce GRAD-SUM, a scalable and flexible method for automatic prompt engineering that builds on gradient-based optimization techniques. Our approach incorporates user-defined task descriptions and evaluation criteria, and features a novel gradient summarization module to generalize feedback effectively. Our results demonstrate that GRAD-SUM consistently outperforms existing methods across various benchmarks, highlighting its versatility and effectiveness in automatic prompt optimization.
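As a rough illustration of how a gradient-based prompt-optimization loop of this kind can work, the Python sketch below runs a prompt on a batch of inputs, asks an LLM to critique each output against the user-defined criteria (the textual "gradients"), summarizes those critiques into one piece of general feedback, and edits the prompt accordingly. The `llm` helper, the prompt templates, and all function names are hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
def llm(prompt: str) -> str:
    """Placeholder for a call to any chat/completion model (assumption)."""
    raise NotImplementedError("wire up your LLM client here")


def optimize_prompt(prompt, batch, task_desc, criteria, steps=5):
    """Sketch of a textual-gradient loop with a summarization step."""
    for _ in range(steps):
        # 1. Run the current prompt on a batch of inputs.
        outputs = [llm(prompt.format(input=x)) for x in batch]

        # 2. Critique each output against the user-defined criteria,
        #    producing one natural-language "gradient" per example.
        gradients = [
            llm(
                f"Task: {task_desc}\nCriteria: {criteria}\n"
                f"Input: {x}\nOutput: {y}\n"
                "Explain how the prompt should change to improve this output."
            )
            for x, y in zip(batch, outputs)
        ]

        # 3. Summarize the per-example gradients into one general piece
        #    of feedback (the gradient summarization step).
        summary = llm(
            "Summarize the recurring, generalizable feedback in these "
            "critiques:\n" + "\n---\n".join(gradients)
        )

        # 4. Edit the prompt using the summarized gradient.
        prompt = llm(
            f"Current prompt:\n{prompt}\n\nFeedback:\n{summary}\n\n"
            "Rewrite the prompt to address the feedback. Return only the prompt."
        )
    return prompt
```

In the abstract's framing, the summarization step (3) is what keeps the prompt edits general rather than overfit to feedback from individual batch examples.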
