Inference-time Policy Adapters (IPA): Tailoring Extreme-scale LMs Without Fine-tuning

Lu Ximing, Brahman Faeze, West Peter, Jung Jaehun, Chandu Khyathi, Ravichander Abhilasha, Qin Lianhui, Ammanabrolu Prithviraj, Jiang Liwei, Ramnath Sahana, Dziri Nouha, Fisher Jillian, Lin Bill Yuchen, Hallinan Skyler, Ren Xiang, Welleck Sean, Choi Yejin. Arxiv 2023

[Paper]    
Agentic Applications Fine Tuning GPT Language Modeling Model Architecture Pretraining Methods Prompting Reinforcement Learning Training Techniques

While extreme-scale language models have demonstrated exceptional performance on a variety of language tasks, the degree of control over these language models through pure prompting can often be limited. Directly fine-tuning such language models can be effective for tailoring them, but it can be either extremely costly (e.g., GPT-3) or not even feasible for the broader community (e.g., GPT-4). We propose Inference-time Policy Adapters (IPA), which efficiently tailors a language model such as GPT-3 without fine-tuning it. IPA guides a large base model during decoding time through a lightweight policy adapter trained to optimize an arbitrary user objective with reinforcement learning. On five challenging text generation tasks, such as toxicity reduction and lexically constrained generation, IPA consistently brings significant improvements over off-the-shelf language models. It outperforms competitive baseline methods, sometimes even including expensive fine-tuning. In particular, tailoring GPT-2 with IPA can outperform GPT-3, while tailoring GPT-3 with IPA brings a major performance boost over GPT-3 (and sometimes even over GPT-4). Our promising results highlight the potential of IPA as a lightweight alternative to tailoring extreme-scale language models.
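The abstract describes the mechanism only at a high level: a frozen base model is steered at decoding time by a lightweight adapter policy trained with reinforcement learning. The sketch below illustrates one plausible realization of that decoding-time combination, where the base model's and adapter's next-token distributions are multiplied (log-probabilities added) before sampling. The model names, the product-of-distributions mixing rule, and the omission of the RL training loop are assumptions made for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: guide a frozen base LM at decoding time with a small
# adapter LM that shares the same vocabulary. In IPA the adapter would be
# trained with RL against a user-defined reward; that loop is omitted here.
tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")
base_lm = AutoModelForCausalLM.from_pretrained("gpt2-xl").eval()  # frozen base model
adapter = AutoModelForCausalLM.from_pretrained("gpt2").eval()     # lightweight adapter policy

@torch.no_grad()
def adapted_generate(prompt: str, max_new_tokens: int = 30) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        base_logits = base_lm(ids).logits[:, -1, :]
        adapter_logits = adapter(ids).logits[:, -1, :]
        # Combine the two policies in log space:
        # p_tailored(x) proportional to p_base(x) * p_adapter(x)
        combined = F.log_softmax(base_logits, dim=-1) + F.log_softmax(adapter_logits, dim=-1)
        next_id = torch.multinomial(F.softmax(combined, dim=-1), num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(adapted_generate("The following is a polite reply:"))
```

Because only the small adapter would receive gradients during RL training while the extreme-scale base model stays frozen, the approach avoids the cost (or impossibility) of fine-tuning the base model directly.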

Similar Work