Rethinking The Role Of Demonstrations: What Makes In-context Learning Work?

Min Sewon, Lyu Xinxi, Holtzman Ari, Artetxe Mikel, Lewis Mike, Hajishirzi Hannaneh, Zettlemoyer Luke. arXiv 2022

[Paper]    
GPT In Context Learning Model Architecture Prompting

Large language models (LMs) are able to in-context learn – perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs. However, there has been little understanding of how the model learns and which aspects of the demonstrations contribute to end task performance. In this paper, we show that ground truth demonstrations are in fact not required – randomly replacing labels in the demonstrations barely hurts performance on a range of classification and multi-choice tasks, consistently over 12 different models including GPT-3. Instead, we find that other aspects of the demonstrations are the key drivers of end task performance, including the fact that they provide a few examples of (1) the label space, (2) the distribution of the input text, and (3) the overall format of the sequence. Together, our analysis provides a new way of understanding how and why in-context learning works, while opening up new questions about how much can be learned from large language models through inference alone.
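To make the label ablation described in the abstract concrete, the sketch below builds an in-context prompt from a handful of demonstrations and optionally swaps each gold label for one sampled uniformly from the label space. The sentiment task, example reviews, and "Review/Sentiment" template are hypothetical illustrations, not taken from the paper.

```python
import random

# Hypothetical demonstration set for a binary sentiment task (illustrative only).
LABEL_SPACE = ["positive", "negative"]
demonstrations = [
    ("The film was a delight from start to finish.", "positive"),
    ("I regretted buying a ticket within ten minutes.", "negative"),
    ("A warm, funny, and surprisingly moving story.", "positive"),
    ("The plot collapsed under its own weight.", "negative"),
]

def build_prompt(demos, test_input, randomize_labels=False, rng=None):
    """Concatenate input-label pairs into an in-context learning prompt.

    With randomize_labels=True, every demonstration keeps its input text and
    format, but its label is drawn at random from the label space, mirroring
    the ground-truth-label ablation the paper studies.
    """
    rng = rng or random.Random(0)
    lines = []
    for text, label in demos:
        if randomize_labels:
            label = rng.choice(LABEL_SPACE)
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The new test input is appended in the same format, with the label left
    # for the LM to predict.
    lines.append(f"Review: {test_input}\nSentiment:")
    return "\n\n".join(lines)

# The resulting string would be fed to an LM, and the model's next-token
# preference over the label space taken as the prediction.
print(build_prompt(demonstrations, "An instant classic.", randomize_labels=True))
```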

Similar Work