Extracting Paragraphs From LLM Token Activations

Pochinkov Nicholas, Benoit Angelo, Agarwal Lovkush, Majid Zainab Ali, Ter-Minassian Lucile. arXiv 2024

[Paper]    
RAG

Generative large language models (LLMs) excel at natural language processing tasks, yet their inner workings remain underexplored beyond token-level predictions. This study investigates the degree to which these models decide the content of a paragraph at its onset, shedding light on their contextual understanding. By examining the information encoded in single-token activations, specifically those of the "\n\n" double-newline token, we demonstrate that patching these activations can transfer significant information about the context of the following paragraph, providing further insight into the model's capacity to plan ahead.
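The core operation the abstract describes, activation patching, can be illustrated with a toy sketch: capture the activation at one token position from a "source" run and overwrite the same position in a "destination" run. The model below is a minimal residual network standing in for a transformer, and the layer index, position, and dimensions are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
D, L = 8, 2  # toy hidden size and layer count (assumptions, not the paper's)
W = [rng.standard_normal((D, D)) * 0.1 for _ in range(L)]

def forward(x, patch=None):
    """Run a toy residual network over a (tokens, D) activation array.

    patch = (layer, pos, vec): after `layer`, overwrite the activation at
    token position `pos` with `vec` -- single-token activation patching.
    Returns the final activations and a per-layer cache.
    """
    cache = []
    for layer, w in enumerate(W):
        x = x + np.maximum(x @ w, 0.0)  # residual + ReLU "block"
        if patch is not None and patch[0] == layer:
            x = x.copy()
            x[patch[1]] = patch[2]      # the patching step
        cache.append(x.copy())
    return x, cache

src = rng.standard_normal((5, D))  # run containing the source context
dst = rng.standard_normal((5, D))  # run we patch into

POS = 2  # stand-in for the "\n\n" token position
_, src_cache = forward(src)
src_act = src_cache[0][POS]        # captured source activation at layer 0

patched, _ = forward(dst, patch=(0, POS, src_act))
clean, _ = forward(dst)
# The patch changes the downstream representation at POS; in a real
# transformer, attention would also propagate it to later positions.
```

In the paper's setting, comparing the patched and clean continuations is what reveals how much paragraph-level context a single "\n\n" activation carries.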

Similar Work