
Retrieved In-context Principles From Previous Mistakes

Sun Hao, Jiang Yong, Wang Bo, Hou Yingyan, Zhang Yan, Xie Pengjun, Huang Fei. arXiv 2024

[Paper]    
Tags: In Context Learning, Prompting, RAG, Reinforcement Learning, Tools

In-context learning (ICL) has been instrumental in adapting Large Language Models (LLMs) to downstream tasks using correct input-output examples. Recent advances have attempted to improve model performance through principles derived from mistakes, yet these approaches suffer from a lack of customization and inadequate error coverage. To address these limitations, we propose Retrieved In-Context Principles (RICP), a novel teacher-student framework. In RICP, the teacher model analyzes mistakes made by the student model and generates reasons and insights for preventing similar mistakes. These mistakes are clustered by their underlying reasons to develop task-level principles, which enhances the error coverage of the principles. During inference, the most relevant mistakes for each question are retrieved to create question-level principles, improving the customization of the provided guidance. RICP is orthogonal to existing prompting methods and does not require intervention from the teacher model during inference. Experimental results across seven reasoning benchmarks show that RICP effectively enhances performance when applied to various prompting strategies.
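The abstract describes two mechanisms: clustering mistakes by their underlying reasons to form task-level principles, and retrieving the mistakes most similar to an incoming question to form question-level principles. The paper's exact implementation is not reproduced here; the following is a minimal sketch under assumed choices of a sentence encoder (stubbed with a toy hash-based embedding so it runs standalone), k-means for the clustering step, and cosine similarity for retrieval. All record contents, function names, and parameters (`embed`, `mistakes`, `top_k`, etc.) are illustrative placeholders, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for a real sentence encoder (e.g., a SentenceTransformer);
# a deterministic toy embedding keeps the sketch self-contained.
def embed(text: str, dim: int = 32) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Mistake records produced offline by the teacher model: each pairs a
# student error with the teacher's analyzed reason and an insight for
# avoiding it. Contents below are illustrative placeholders.
mistakes = [
    {"question": "Is 91 prime?", "reason": "missed factor check",
     "insight": "test divisibility up to sqrt(n)"},
    {"question": "17 + 25 * 2 = ?", "reason": "ignored operator precedence",
     "insight": "multiply before adding"},
    {"question": "Is 51 prime?", "reason": "missed factor check",
     "insight": "51 = 3 * 17"},
]

# Task-level principles: cluster mistakes by the embedding of their
# underlying reason, then keep one representative insight per cluster
# so the principles cover distinct error types.
reason_vecs = np.stack([embed(m["reason"]) for m in mistakes])
k = 2
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(reason_vecs)
task_principles = [mistakes[int(np.flatnonzero(labels == c)[0])]["insight"]
                   for c in range(k)]

# Question-level principles: at inference time, retrieve the stored
# mistakes most similar to the new question and surface their insights.
def question_principles(question: str, top_k: int = 2) -> list[str]:
    q = embed(question)
    qvecs = np.stack([embed(m["question"]) for m in mistakes])
    sims = qvecs @ q  # cosine similarity; vectors are unit-norm
    top = np.argsort(-sims)[:top_k]
    return [mistakes[i]["insight"] for i in top]

# Assemble the final student prompt; note no teacher call happens here.
def build_prompt(question: str) -> str:
    lines = ["Task-level principles:"]
    lines += [f"- {p}" for p in task_principles]
    lines += ["Question-level principles:"]
    lines += [f"- {p}" for p in question_principles(question)]
    lines += [f"Question: {question}"]
    return "\n".join(lines)

print(build_prompt("Is 87 prime?"))
```

Because the teacher's analysis and the clustering both happen offline, inference reduces to one embedding lookup plus prompt assembly, which is why RICP composes with existing prompting strategies without extra teacher-model calls.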
