Efficacy Of Machine-generated Instructions

Gulati Samaksh, Verma Anshit, Parmar Manoj, Chaudhary Palash. arXiv 2023

[Paper]
Tags: BERT, GPT, Model Architecture

Large “instruction-tuned” language models (i.e., models fine-tuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they depend heavily on human-written instruction data that is often limited in quantity, diversity, and creativity, thereby hindering the generality of the tuned model. We conducted a quantitative study of the efficacy of machine-generated annotations, comparing a BERT model fine-tuned on human-written annotations against one fine-tuned on machine-generated annotations. Using the vanilla GPT-3 model to generate annotations, we found that 78.54% of the machine-generated annotations were correct, and the model fine-tuned on them reached 96.01% of the performance of the model fine-tuned on human-labelled annotations. This result shows that machine-generated annotations are a resource- and cost-effective way to fine-tune downstream models.
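
The comparison described in the abstract can be reproduced at a small scale with standard tooling. The sketch below is not the authors' released code: it fine-tunes a BERT classifier twice with Hugging Face Transformers, once on human-labelled data and once on machine-generated (e.g., GPT-3-produced) labels, then evaluates both on the same held-out test set. The file names, label count, and hyperparameters are illustrative assumptions.

```python
# Sketch: compare fine-tuning BERT on human vs. machine-generated annotations.
# Assumes CSV files with "text" and "label" columns; paths and hyperparameters
# are placeholders, not values from the paper.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Pad/truncate so the default data collator can batch without dynamic padding.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

def finetune_and_eval(train_csv, test_csv, num_labels=2):
    """Fine-tune BERT on train_csv and return accuracy on test_csv."""
    data = load_dataset("csv", data_files={"train": train_csv, "test": test_csv})
    data = data.map(tokenize, batched=True)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=num_labels)
    args = TrainingArguments(output_dir="out", num_train_epochs=3,
                             per_device_train_batch_size=16, report_to=[])
    trainer = Trainer(model=model, args=args,
                      train_dataset=data["train"], eval_dataset=data["test"],
                      compute_metrics=accuracy)
    trainer.train()
    return trainer.evaluate()["eval_accuracy"]

if __name__ == "__main__":
    acc_human = finetune_and_eval("human_labels.csv", "test.csv")      # human annotations
    acc_machine = finetune_and_eval("machine_labels.csv", "test.csv")  # machine-generated annotations
    print(f"human: {acc_human:.4f}  machine: {acc_machine:.4f}  "
          f"relative performance: {100 * acc_machine / acc_human:.2f}%")
```

The "relative performance" printed at the end corresponds to the kind of comparison reported in the abstract (machine-labelled fine-tuning reaching 96.01% of the human-labelled result), though the datasets and settings here are placeholders.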

Similar Work