In-context Learning And Induction Heads

Catherine Olsson et al. arXiv 2022 – 57 citations

[Paper]    
Training Techniques Transformer Attention Mechanism In-Context Learning Model Architecture

“Induction heads” are attention heads that implement a simple algorithm to complete token sequences like [A][B] … [A] -> [B]. In this work, we present preliminary and indirect evidence for a hypothesis that induction heads might constitute the mechanism for the majority of all “in-context learning” in large transformer models (i.e. decreasing loss at increasing token indices). We find that induction heads develop at precisely the same point as a sudden sharp increase in in-context learning ability, visible as a bump in the training loss. We present six complementary lines of evidence, arguing that induction heads may be the mechanistic source of general in-context learning in transformer models of any size. For small attention-only models, we present strong, causal evidence; for larger models with MLPs, we present correlational evidence.
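The [A][B] … [A] -> [B] pattern can be made concrete with a short sketch. The snippet below is illustrative only and not from the paper: it implements the sequence-level completion rule that an induction head is hypothesized to realize inside attention, namely, when the current token has occurred earlier, copy the token that followed its most recent earlier occurrence.

```python
# Minimal sketch (assumption: plain Python over a token list) of the
# sequence-level rule attributed to induction heads:
# [A][B] ... [A] -> [B].
# The paper describes this as an attention-head circuit, not explicit code.

def induction_prediction(tokens):
    """Return the predicted next token, or None if the current token has no earlier occurrence."""
    current = tokens[-1]
    # Scan backwards for the most recent earlier occurrence of the current token.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            return tokens[i + 1]  # copy the token that followed it last time
    return None

# Example: the last "the" matches the first "the", so the rule predicts "cat".
print(induction_prediction(["the", "cat", "sat", "on", "the"]))  # -> "cat"
```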

Similar Work