
APT-Pipe: A Prompt-Tuning Tool for Social Data Annotation Using ChatGPT

Zhu Yiming, Yin Zhizhuo, Tyson Gareth, Haq Ehsan-Ul, Lee Lik-Hang, Hui Pan. arXiv 2024

[Paper]    
Applications GPT Model Architecture Prompting RAG Reinforcement Learning Tools

Recent research has highlighted the potential of LLM applications, like ChatGPT, for performing label annotation on social computing text. However, it is well known that performance hinges on the quality of the input prompts. To address this, there has been a flurry of research into prompt tuning: techniques and guidelines that attempt to improve the quality of prompts. Yet these largely rely on manual effort and prior knowledge of the dataset being annotated. To address this limitation, we propose APT-Pipe, an automated prompt-tuning pipeline. APT-Pipe aims to automatically tune prompts to enhance ChatGPT’s text classification performance on any given dataset. We implement APT-Pipe and test it across twelve distinct text classification datasets. We find that prompts tuned by APT-Pipe help ChatGPT achieve a higher weighted F1-score on nine of the twelve datasets, with an average improvement of 7.01%. We further highlight APT-Pipe’s flexibility as a framework by showing how it can be extended to support additional tuning mechanisms.
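To make the core idea concrete, below is a minimal sketch of an automated prompt-tuning loop in the spirit of APT-Pipe: candidate prompt components are tried one at a time, and a component is kept only if it improves the weighted F1-score on a labeled validation split. This is an illustrative assumption about the general approach, not the paper's actual pipeline; the names `annotate`, `tune_prompt`, and `candidates` are hypothetical.

```python
from sklearn.metrics import f1_score


def annotate(prompt: str, texts: list[str]) -> list[str]:
    """Hypothetical wrapper (not from the paper): sends each text to
    ChatGPT with the given prompt and parses the predicted label."""
    raise NotImplementedError


def tune_prompt(base_prompt: str,
                candidates: list[str],
                texts: list[str],
                gold_labels: list[str]) -> tuple[str, float]:
    """Greedy prompt tuning: append a candidate component to the prompt
    only if it raises the weighted F1-score on the validation texts."""
    best_prompt = base_prompt
    best_f1 = f1_score(gold_labels, annotate(best_prompt, texts),
                       average="weighted")
    for component in candidates:
        trial = best_prompt + "\n" + component
        trial_f1 = f1_score(gold_labels, annotate(trial, texts),
                            average="weighted")
        if trial_f1 > best_f1:  # keep the component only on improvement
            best_prompt, best_f1 = trial, trial_f1
    return best_prompt, best_f1
```

In such a scheme, candidate components might include few-shot exemplars or instructions requesting structured output; the details of APT-Pipe's actual tuning steps are described in the paper.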

Similar Work