
Imitating Language Via Scalable Inverse Reinforcement Learning

Wulfmeier Markus, Bloesch Michael, Vieillard Nino, Ahuja Arun, Bornschein Jorg, Huang Sandy, Sokolov Artem, Barnes Matt, Desjardins Guillaume, Bewley Alex, Bechtle Sarah Maria Elisabeth, Springenberg Jost Tobias, Momchev Nikola, Bachem Olivier, Geist Matthieu, Riedmiller Martin. arXiv 2024

[Paper]    
Agentic, Fine Tuning, GPT, Pretraining Methods, Reinforcement Learning, Training Techniques

The majority of language model training builds on imitation learning: it covers pretraining and supervised fine-tuning, and it shapes the starting conditions for reinforcement learning from human feedback (RLHF). The simplicity and scalability of maximum likelihood estimation (MLE) for next-token prediction have made it the predominant paradigm. However, the broader field of imitation learning can exploit the sequential structure underlying autoregressive generation more effectively. We investigate the inverse reinforcement learning (IRL) perspective on imitation, extracting rewards and directly optimizing sequences instead of individual token likelihoods, and evaluate its benefits for fine-tuning large language models. We provide a new angle by reformulating inverse soft-Q-learning as a temporal-difference-regularized extension of MLE. This creates a principled connection between MLE and IRL and allows trading off added complexity against increased performance and diversity of generations in the supervised fine-tuning (SFT) setting. We find clear advantages for IRL-based imitation, in particular for retaining diversity while maximizing task performance, making IRL a strong alternative on fixed SFT datasets even without online data generation. Our analysis of the IRL-extracted reward functions further indicates benefits for more robust reward functions via tighter integration of supervised and preference-based LLM post-training.
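To make the "temporal-difference-regularized extension of MLE" idea concrete, here is a minimal sketch in PyTorch. It treats the language model's logits as soft Q-values, keeps the standard next-token MLE term, and adds an illustrative squared TD penalty along the expert sequence. The function name, the form of the regularizer, and the hyperparameters (`gamma`, `td_weight`) are assumptions for illustration, not the paper's published objective.

```python
# Sketch of a TD-regularized MLE objective for next-token prediction.
# Assumption: logits are read as soft Q-values, with the soft state value
# V(s_t) = logsumexp over the vocabulary; the regularizer penalizes the
# squared residual between the chosen Q at step t and gamma * V at step t+1.
import torch
import torch.nn.functional as F

def td_regularized_mle_loss(logits, targets, gamma=0.99, td_weight=0.1):
    """logits: (batch, seq_len, vocab); targets: (batch, seq_len) expert tokens."""
    # Standard MLE term: negative log-likelihood of the expert tokens.
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # (batch, seq_len)

    # Soft state values from the logits-as-Q-values reading.
    values = torch.logsumexp(logits, dim=-1)  # (batch, seq_len)

    # Illustrative temporal-difference regularizer along the expert sequence.
    q_taken = logits.gather(-1, targets.unsqueeze(-1)).squeeze(-1)   # (batch, seq_len)
    td_error = q_taken[:, :-1] - gamma * values[:, 1:].detach()      # (batch, seq_len-1)
    td_penalty = td_error.pow(2).mean()

    return nll.mean() + td_weight * td_penalty

# Toy usage with random logits standing in for a language model's output.
if __name__ == "__main__":
    batch, seq_len, vocab = 2, 8, 32
    logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
    targets = torch.randint(0, vocab, (batch, seq_len))
    loss = td_regularized_mle_loss(logits, targets)
    loss.backward()
    print(float(loss))
```

Setting `td_weight` to zero recovers plain MLE, which is one way to see the claimed trade-off between added complexity and the performance and diversity gains reported for the IRL view.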

Similar Work