
PromptMix: A Class Boundary Augmentation Method For Large Language Model Distillation

Gaurav Sahu, Olga Vechtomova, Dzmitry Bahdanau, Issam H. Laradji. arXiv 2023

[Paper] [Code]    
BERT Distillation Efficiency And Optimization Few Shot GPT Has Code Model Architecture Prompting Reinforcement Learning Training Techniques

Data augmentation is a widely used technique to address text classification when only a limited amount of training data is available. Recent work often tackles this problem with large language models (LLMs) like GPT-3 that can generate new examples from the ones already available. In this work, we propose a method to generate more helpful augmented data by utilizing the LLM's abilities to follow instructions and perform few-shot classification. Our PromptMix method consists of two steps: 1) generate challenging text augmentations near class boundaries; because such borderline examples increase the risk of false positives in the dataset, we 2) relabel the augmentations using a prompting-based LLM classifier to improve the correctness of labels in the generated data. We evaluate the proposed method in challenging 2-shot and zero-shot settings on four text classification datasets: Banking77, TREC6, Subjectivity (SUBJ), and Twitter Complaints. Our experiments show that generating and, crucially, relabeling borderline examples facilitates the transfer of knowledge from a massive LLM like GPT-3.5-turbo into smaller and cheaper classifiers like DistilBERT\(_{base}\) and BERT\(_{base}\). Furthermore, 2-shot PromptMix outperforms multiple 5-shot data augmentation methods on all four datasets. Our code is available at https://github.com/ServiceNow/PromptMix-EMNLP-2023.
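To make the two-step recipe concrete, the following is a minimal Python sketch of a PromptMix-style loop, not the authors' released code (see the repository above for that). The `complete` callable is a hypothetical stand-in for any instruction-following LLM call, and the prompt wording only paraphrases the paper's two steps.

```python
import random
from typing import Callable

def promptmix_augment(
    classes: dict[str, str],               # class name -> short description
    seed_examples: dict[str, list[str]],   # class name -> a few seed texts
    complete: Callable[[str], str],        # hypothetical LLM call: prompt -> completion
    n_pairs: int = 10,
) -> list[tuple[str, str]]:
    """PromptMix-style augmentation: generate borderline examples for
    random class pairs, then relabel each one with an LLM classifier."""
    augmented = []
    names = list(classes)
    for _ in range(n_pairs):
        a, b = random.sample(names, 2)
        # Step 1: ask the LLM for a sentence that mixes two classes,
        # i.e. lies near their decision boundary.
        gen_prompt = (
            f"Class '{a}': {classes[a]}. Example: {seed_examples[a][0]}\n"
            f"Class '{b}': {classes[b]}. Example: {seed_examples[b][0]}\n"
            f"Write one new sentence that is mostly about '{a}' but "
            f"partially about '{b}'.\nSentence:"
        )
        text = complete(gen_prompt).strip()
        # Step 2: relabel with a prompting-based LLM classifier, since
        # borderline generations are often mislabeled (false positives).
        relabel_prompt = (
            "Classify the sentence into exactly one of these classes:\n"
            + "\n".join(f"- {n}: {d}" for n, d in classes.items())
            + f"\nSentence: {text}\nClass:"
        )
        label = complete(relabel_prompt).strip()
        if label in classes:
            augmented.append((text, label))
    return augmented
```

The resulting (text, label) pairs would then be added to the training set of a small student classifier such as DistilBERT, which is where the knowledge transfer from the teacher LLM takes place.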

Similar Work