PAS: Data-efficient Plug-and-play Prompt Augmentation System

Zheng Miao, Liang Hao, Yang Fan, Sun Haoze, Li Tianpeng, Xiong Lingchu, Zhang Yan, Wu Youzhen, Li Kun, Shen Yanjun, Lin Mingan, Zhang Tao, Dong Guosheng, Qiao Yujing, Fang Kun, Chen Weipeng, Cui Bin, Zhang Wentao, Zhou Zenan. arXiv 2024

[Paper]    
Efficiency And Optimization Prompting RAG Reinforcement Learning

In recent years, the rise of Large Language Models (LLMs) has spurred a growing demand for plug-and-play AI systems. Among the various AI techniques, prompt engineering stands out as particularly significant. However, users often face challenges in writing prompts due to the steep learning curve and significant time investment, and existing automatic prompt engineering (APE) models can be difficult to use. To address this issue, we propose PAS, an LLM-based plug-and-play APE system. PAS utilizes LLMs trained on high-quality, automatically generated prompt complementary datasets, resulting in exceptional performance. In comprehensive benchmarks, PAS achieves state-of-the-art (SoTA) results compared to previous APE models, with an average improvement of 6.09 points. Moreover, PAS is highly efficient, achieving SoTA performance with only 9000 data points. Additionally, PAS can autonomously generate prompt augmentation data without requiring additional human labor. Its flexibility also allows it to be compatible with all existing LLMs and applicable to a wide range of tasks. PAS excels in human evaluations, underscoring its suitability as a plug-in for users. This combination of high performance, efficiency, and flexibility makes PAS a valuable system for enhancing the usability and effectiveness of LLMs through improved prompt engineering.
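The plug-and-play design described above can be sketched as a thin wrapper: the user's prompt is first sent to an augmentation model, which returns complementary instructions that are appended before the combined prompt reaches the underlying LLM. The sketch below is illustrative only; all names are assumptions, and the rule-based `augment_prompt` is a stand-in for the actual PAS model, which is an LLM fine-tuned on roughly 9000 automatically generated prompt-complementation pairs.

```python
# Minimal sketch of a plug-and-play prompt-augmentation wrapper in the
# spirit of PAS. The heuristic augmenter and all names are illustrative
# assumptions, not the paper's implementation.

from typing import Callable


def augment_prompt(user_prompt: str) -> str:
    """Stand-in for the PAS model: append complementary guidance to the
    user's prompt (here a fixed heuristic; PAS generates it with an LLM)."""
    return (
        f"{user_prompt}\n\n"
        "Please be specific, structure the answer clearly, "
        "and state any assumptions you make."
    )


def plug_and_play(llm: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap any LLM callable so every prompt is augmented first.
    The underlying model is untouched, which is what makes the
    augmenter compatible with existing LLMs."""
    def wrapped(prompt: str) -> str:
        return llm(augment_prompt(prompt))
    return wrapped


if __name__ == "__main__":
    echo_llm = lambda p: f"[model saw] {p}"  # stub LLM for demonstration
    assisted = plug_and_play(echo_llm)
    print(assisted("Explain quicksort"))
```

Because the wrapper only transforms the prompt string, the same augmenter composes with any chat or completion API, which mirrors the model-agnostic "plug-in" property the paper claims.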

Similar Work