Self-amplify: Improving Small Language Models With Self Post Hoc Explanations

Milan Bhan, Jean-Noel Vittaut, Nicolas Chesneau, Marie-Jeanne Lesot. arXiv 2024

[Paper]    
Tags: GPT, Interpretability And Explainability, Pretraining Methods, Prompting, RAG

Incorporating natural language rationales into the prompt and using In-Context Learning (ICL) have led to significant improvements in Large Language Model (LLM) performance. However, generating high-quality rationales requires human annotation or the use of auxiliary proxy models. In this work, we propose Self-AMPLIFY, a method that automatically generates rationales from post hoc explanation methods applied to Small Language Models (SLMs) to improve their own performance. Self-AMPLIFY is a three-step method that targets samples, generates rationales, and builds a final prompt to leverage ICL. Self-AMPLIFY is evaluated on four SLMs and five datasets requiring strong reasoning abilities, where it performs well against competing approaches and yields strong accuracy improvements. Self-AMPLIFY is the first method to apply post hoc explanation methods to autoregressive language models to generate rationales that improve their own performance in a fully automated manner.
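The three-step pipeline named in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the helper functions, the toy `predict` and `attribute` callables, and the prompt format are all assumptions standing in for an SLM's prediction function and a post hoc attribution method.

```python
def target_samples(examples, predict):
    """Step 1 (hypothetical criterion): keep samples the model already
    answers correctly, so their explanations can serve as rationales."""
    return [ex for ex in examples if predict(ex["question"]) == ex["answer"]]

def generate_rationale(example, attribute):
    """Step 2: turn a post hoc attribution (here, stand-in token
    importance scores) into a natural-language rationale string."""
    scores = attribute(example["question"])
    top = sorted(scores, key=scores.get, reverse=True)[:2]
    return "Key clues: " + ", ".join(top)

def build_prompt(examples, attribute, query):
    """Step 3: assemble an ICL prompt whose few-shot demonstrations
    carry the generated rationales."""
    shots = [
        f"Q: {ex['question']}\n"
        f"Rationale: {generate_rationale(ex, attribute)}\n"
        f"A: {ex['answer']}"
        for ex in examples
    ]
    return "\n\n".join(shots) + f"\n\nQ: {query}\nRationale:"

# Toy stand-ins for an SLM and an attribution method.
examples = [
    {"question": "2+2?", "answer": "4"},
    {"question": "3+3?", "answer": "7"},  # deliberately wrong label
]
predict = lambda q: "4" if q == "2+2?" else "6"
attribute = lambda q: {tok: len(tok) for tok in q.split()}

kept = target_samples(examples, predict)      # only the correct sample
prompt = build_prompt(kept, attribute, "5+5?")
```

The resulting `prompt` ends with an open `Rationale:` slot, prompting the SLM to produce its own reasoning before the final answer, in the usual rationale-augmented ICL style.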
