ICL Markup: Structuring In-context Learning Using Soft-token Tags

Marc-Etienne Brunet, Ashton Anderson, Richard Zemel. arXiv 2023

[Paper]    
Applications Few Shot Fine Tuning In Context Learning Pretraining Methods Prompting Reinforcement Learning Tools Training Techniques

Large pretrained language models (LLMs) can be rapidly adapted to a wide variety of tasks via a text-to-text approach, where the instruction and input are fed to the model in natural language. Combined with in-context learning (ICL), this paradigm is impressively flexible and powerful. However, it also burdens users with an overwhelming number of choices, many of them arbitrary. Inspired by markup languages like HTML, we contribute a method of using soft-token tags to compose prompt templates. This approach reduces arbitrary decisions and streamlines the application of ICL. Our method is a form of meta-learning for ICL; it learns these tags in advance during a parameter-efficient fine-tuning "warm-up" process. The tags can subsequently be used in templates for ICL on new, unseen tasks without any additional fine-tuning. Our experiments with this approach yield promising initial results, improving LLM performance on important enterprise applications such as few-shot and open-world intent detection, as well as text classification in news and legal domains.
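
This entry does not include the authors' code, so the following is only a minimal PyTorch sketch of the general idea: soft-token tags implemented as small blocks of trainable embeddings (in the spirit of soft prompts) that are spliced into an ICL template wherever an HTML-like markup tag would appear, while the base model's token embeddings stay frozen. The names `SoftTokenTag` and `build_icl_input`, the toy tokenizer, and all dimensions are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of "soft-token tags" as trainable embedding blocks spliced
# into an in-context-learning template. Illustrative only; not the authors' code.
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM = 1000, 64

def toy_tokenizer(text: str) -> torch.Tensor:
    """Stand-in tokenizer: hashes whitespace-split tokens into a small vocab."""
    return torch.tensor([hash(w) % VOCAB_SIZE for w in text.split()])

class SoftTokenTag(nn.Module):
    """One named tag = a few trainable embedding vectors. In a warm-up phase,
    these would be the only new parameters that get tuned."""
    def __init__(self, num_tokens: int = 4):
        super().__init__()
        self.vectors = nn.Parameter(torch.randn(num_tokens, EMBED_DIM) * 0.02)

def build_icl_input(template, tags, token_embedder):
    """Interleave frozen text embeddings with trainable tag embeddings.
    `template` is a list of ("text", str) or ("tag", name) items."""
    pieces = []
    for kind, value in template:
        if kind == "text":
            pieces.append(token_embedder(toy_tokenizer(value)))  # frozen embeddings
        else:
            pieces.append(tags[value].vectors)                   # trainable tag block
    return torch.cat(pieces, dim=0)  # (seq_len, EMBED_DIM), fed to the LLM

# Frozen base-model embedding table; only the tags would be trained.
token_embedder = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
token_embedder.weight.requires_grad_(False)

tags = nn.ModuleDict({
    "example": SoftTokenTag(),
    "label": SoftTokenTag(),
    "query": SoftTokenTag(),
})

# A few-shot intent-detection template composed with tags instead of
# hand-written delimiters such as "Input:" / "Output:".
template = [
    ("tag", "example"), ("text", "book a table for two tonight"),
    ("tag", "label"),   ("text", "restaurant_reservation"),
    ("tag", "example"), ("text", "what is the weather in Paris"),
    ("tag", "label"),   ("text", "weather_query"),
    ("tag", "query"),   ("text", "play some jazz music"),
    ("tag", "label"),
]

inputs_embeds = build_icl_input(template, tags, token_embedder)
print(inputs_embeds.shape)  # (sequence length, EMBED_DIM)
```

Under these assumptions, the warm-up corresponds to training only the tag vectors on a collection of tasks (a parameter-efficient fine-tune), after which the same tags can be reused in templates for unseen tasks without any further tuning, as the abstract describes.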

Similar Work