
A Critical Evaluation Of AI Feedback For Aligning Large Language Models

Archit Sharma, Sedrick Keh, Eric Mitchell, Chelsea Finn, Kushal Arora, Thomas Kollar. arXiv 2024

[Paper]    
Agentic, Fine Tuning, GPT, Interpretability And Explainability, Model Architecture, Pretraining Methods, Reinforcement Learning, Training Techniques

Reinforcement learning with AI feedback (RLAIF) is a popular paradigm for improving the instruction-following abilities of powerful pre-trained language models. RLAIF first performs supervised fine-tuning (SFT) using demonstrations from a teacher model and then further fine-tunes the model with reinforcement learning (RL), using feedback from a critic model. While recent popular open-source models have demonstrated substantial performance gains from the RL step, in this paper we question whether the complexity of this RL step is truly warranted for AI feedback. We show that the improvements from the RL step are almost entirely due to the widespread practice of using a weaker teacher model (e.g., GPT-3.5) for SFT data collection than the critic (e.g., GPT-4) used for AI feedback generation. Specifically, we show that simple supervised fine-tuning with GPT-4 as the teacher outperforms existing RLAIF pipelines. More generally, we find that the gains from RLAIF vary substantially across base model families, test-time evaluation protocols, and critic models. Finally, we provide a mechanistic explanation for when SFT may outperform the full two-step RLAIF pipeline, as well as suggestions for making RLAIF maximally useful in practice.
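
For readers unfamiliar with the setup the abstract compares, the sketch below lays out the two recipes at a structural level: the standard RLAIF pipeline (SFT on demonstrations from a weaker teacher, then RL against feedback from a stronger critic) versus plain SFT on demonstrations from the stronger model. It assumes the common instantiation in which the critic supplies pairwise preference labels over sampled responses; every function and variable name is an illustrative stub invented for this sketch, not an API from the paper or any library.

```python
# Hypothetical sketch of the two pipelines contrasted in the paper.
# All functions are illustrative stubs, not a real training API.

def generate_demonstrations(teacher, prompts):
    """Collect (prompt, completion) pairs from a teacher model for SFT."""
    return [(p, teacher(p)) for p in prompts]

def supervised_finetune(model, demonstrations):
    """Stage 1 (SFT): train the base model to imitate the teacher's completions."""
    # Placeholder: a real implementation would run gradient updates here.
    return model

def collect_ai_preferences(critic, model, prompts):
    """Sample candidate responses and let the critic pick the preferred one (AI feedback)."""
    preferences = []
    for p in prompts:
        a, b = model(p), model(p)                      # two samples per prompt
        chosen, rejected = (a, b) if critic(p, a, b) else (b, a)
        preferences.append((p, chosen, rejected))
    return preferences

def rl_finetune(model, preferences):
    """Stage 2 (RL): optimize the SFT model against the critic's preferences."""
    # Placeholder: PPO- or DPO-style preference optimization would go here.
    return model

def rlaif_pipeline(base_model, weak_teacher, strong_critic, prompts):
    """Common practice the paper questions: weak teacher for SFT, strong critic for RL."""
    sft_model = supervised_finetune(
        base_model, generate_demonstrations(weak_teacher, prompts))
    return rl_finetune(
        sft_model, collect_ai_preferences(strong_critic, sft_model, prompts))

def strong_sft_baseline(base_model, strong_teacher, prompts):
    """Baseline highlighted by the paper: SFT directly on the stronger model's demonstrations."""
    return supervised_finetune(
        base_model, generate_demonstrations(strong_teacher, prompts))
```

Per the abstract's headline result, the `strong_sft_baseline` recipe with GPT-4-quality demonstrations can outperform the `rlaif_pipeline` recipe built on GPT-3.5-quality SFT data, since the apparent gains of the RL step largely reflect the teacher/critic quality gap rather than the RL step itself.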

Similar Work