
In-context Learning Dynamics With Random Binary Sequences

Bigelow Eric J., Lubana Ekdeep Singh, Dick Robert P., Tanaka Hidenori, Ullman Tomer D. arXiv 2023

[Paper]    
GPT In Context Learning Model Architecture Prompting Reinforcement Learning Tools

Large language models (LLMs) trained on huge text corpora demonstrate intriguing capabilities, achieving state-of-the-art performance on tasks they were not explicitly trained for. The precise nature of LLM capabilities is often mysterious, and different prompts can elicit different capabilities through in-context learning. We propose a framework for analyzing in-context learning dynamics to understand the latent concepts underlying LLMs' behavioral patterns. This provides a more nuanced understanding than success-or-failure evaluation benchmarks, without requiring access to internal activations as a mechanistic interpretation of circuits would. Inspired by the cognitive science of human randomness perception, we use random binary sequences as context and study the dynamics of in-context learning by manipulating properties of the context data, such as sequence length. In the latest GPT-3.5+ models, we find emergent abilities to generate seemingly random numbers and to learn basic formal languages, with striking in-context learning dynamics in which model outputs transition sharply from seemingly random behaviors to deterministic repetition.
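The abstract describes the probe only at a high level: feed the model a random binary sequence as context, vary properties such as its length, and measure how "random" the continuation looks. The sketch below is a minimal illustration of that kind of experiment, not the authors' code; the prompt format, the `query_model` stub (which here just returns random bits so the script runs end to end; a real experiment would call an LLM API), and the bigram-entropy statistic are all illustrative assumptions.

```python
import math
import random
from collections import Counter


def make_context(length: int, p_one: float = 0.5) -> str:
    """Sample a random binary sequence to use as in-context data.
    The comma-separated "0, 1, 0, ..." format is an assumption, not
    necessarily the prompt format used in the paper."""
    bits = ["1" if random.random() < p_one else "0" for _ in range(length)]
    return ", ".join(bits)


def query_model(prompt: str, n_tokens: int = 50) -> list:
    """Stand-in for a call to an actual LLM completion API.
    Returns uniformly random bits so the script is runnable; replace
    with a real model call to reproduce this style of analysis."""
    return [random.choice(["0", "1"]) for _ in range(n_tokens)]


def bigram_entropy(bits: list) -> float:
    """Shannon entropy (in bits) of the bigram distribution of a 0/1 sequence.
    A continuation that collapses into deterministic repetition
    (e.g. "0, 1, 0, 1, ...") has low bigram entropy; a seemingly
    random one is close to 2 bits."""
    pairs = list(zip(bits, bits[1:]))
    counts = Counter(pairs)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


if __name__ == "__main__":
    # Sweep the context length and track how random the continuation looks.
    for context_len in [8, 16, 32, 64, 128, 256]:
        prompt = make_context(context_len) + ", "
        continuation = query_model(prompt)
        print(f"context length {context_len:4d}  "
              f"bigram entropy of continuation: {bigram_entropy(continuation):.3f}")
```

With a real model behind `query_model`, a sharp drop in a statistic like this as context length grows would correspond to the kind of transition from seemingly random behavior to deterministic repetition reported in the paper.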

Similar Work