In-context Exemplars As Clues To Retrieving From Large Associative Memory

Zhao Jiachen. arXiv 2023

[Paper]
In Context Learning · Prompting · Tools · Training Techniques

Recently, large language models (LLMs) have made remarkable progress in natural language processing. A hallmark capability of LLMs is in-context learning (ICL), which enables them to learn patterns from in-context exemplars without training. ICL performance depends heavily on the exemplars used, yet how to choose them remains unclear because the mechanism of ICL itself is poorly understood. In this paper, we present a novel perspective on ICL by conceptualizing it as contextual retrieval from a model of associative memory. We establish a theoretical framework for ICL based on Hopfield networks. Using this framework, we examine how in-context exemplars influence ICL performance and propose a more efficient active exemplar selection method. Our study sheds new light on the mechanism of ICL by connecting it to memory retrieval, with potential implications for advancing the understanding of LLMs.
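The abstract does not spell out the retrieval rule, but the standard modern (continuous) Hopfield update, whose one-step retrieval coincides with softmax attention, gives a feel for the memory-retrieval view: a query (here, an analogue of the in-context clue) recalls the stored pattern it most resembles. The sketch below is illustrative only, not the paper's implementation; the function name `hopfield_retrieve`, the toy data, and the `beta` parameter are assumptions for the example.

```python
import numpy as np

def hopfield_retrieve(query, memories, beta=1.0):
    """One step of the modern (continuous) Hopfield update.

    query:    (d,)   state/query vector (the retrieval cue)
    memories: (N, d) stored patterns (the associative memory)
    beta:     inverse temperature; larger beta -> sharper retrieval
    """
    # Similarity of the query to every stored pattern
    scores = beta * memories @ query              # (N,)
    # Softmax attention weights over the memories
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Updated state: convex combination of stored patterns
    return weights @ memories                     # (d,)

# Toy usage: three stored patterns, a noisy cue near the second one
rng = np.random.default_rng(0)
memories = rng.normal(size=(3, 8))
cue = memories[1] + 0.1 * rng.normal(size=8)
retrieved = hopfield_retrieve(cue, memories, beta=4.0)
print(np.argmax(memories @ retrieved))  # -> 1: the cue recalls pattern 2
```

Under this reading, in-context exemplars act like the cue vector: they steer the softmax retrieval toward the region of memory relevant to the task, which is the intuition behind the paper's exemplar selection proposal.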

Similar Work