
In-context Learning Demonstration Selection Via Influence Analysis

Vinay M. S., Minh-Hao Van, Xintao Wu. arXiv 2024

[Paper]    
Few Shot Fine Tuning In Context Learning Pretraining Methods Prompting Reinforcement Learning Training Techniques

Large Language Models (LLMs) have showcased their In-Context Learning (ICL) capabilities, enabling few-shot learning without the need for gradient updates. Despite its advantages, the effectiveness of ICL heavily depends on the choice of demonstrations. Selecting the most effective demonstrations for ICL remains a significant research challenge. To tackle this issue, we propose a demonstration selection method named InfICL, which utilizes influence functions to analyze the impacts of training samples. By identifying the most influential training samples as demonstrations, InfICL aims to enhance the ICL generalization performance. To keep InfICL cost-effective, we only use the LLM to generate sample input embeddings, avoiding expensive fine-tuning. Through empirical studies on various real-world datasets, we demonstrate the advantages of InfICL compared to state-of-the-art baselines.
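The selection idea in the abstract can be sketched roughly as follows. This is not the paper's exact method: InfICL uses influence functions, whereas this toy proxy trains a small probe on (stand-in) LLM embeddings and scores each training sample by the alignment of its loss gradient with the average validation gradient, omitting the inverse-Hessian term of a true influence function. All data, dimensions, and the probe itself are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for LLM-generated input embeddings (per the paper,
# the LLM is used only to embed samples; everything below is a toy proxy).
n_train, n_val, dim = 40, 10, 8
X_train = rng.normal(size=(n_train, dim))
y_train = (X_train[:, 0] > 0).astype(float)
X_val = rng.normal(size=(n_val, dim))
y_val = (X_val[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a small logistic-regression probe on the embeddings by gradient descent.
w = np.zeros(dim)
for _ in range(200):
    grad = X_train.T @ (sigmoid(X_train @ w) - y_train) / n_train
    w -= 0.5 * grad

def per_sample_grads(X, y, w):
    # Gradient of the logistic loss w.r.t. w, one row per sample.
    return (sigmoid(X @ w) - y)[:, None] * X

# First-order influence proxy: dot product between each training sample's
# gradient and the mean validation gradient (the Hessian inverse used by
# classic influence functions is dropped for brevity).
g_train = per_sample_grads(X_train, y_train, w)
g_val = per_sample_grads(X_val, y_val, w).mean(axis=0)
influence = g_train @ g_val

# Pick the k highest-scoring training samples as ICL demonstrations.
k = 4
demo_idx = np.argsort(influence)[-k:][::-1]
print("selected demonstration indices:", demo_idx)
```

Samples with strongly positive scores are those whose gradients most reduce validation loss, which is the intuition behind using influence analysis for demonstration ranking.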

Similar Work