Aligning Large Language Models Via Fine-grained Supervision

Dehong Xu, Liang Qiu, Minseok Kim, Faisal Ladhak, Jaeyoung Do. arXiv, 2024

[Paper]    
Agentic, Efficiency And Optimization, Reinforcement Learning, Training Techniques

Pre-trained large-scale language models (LLMs) excel at producing coherent articles, yet their outputs may be untruthful, toxic, or misaligned with user expectations. Current approaches rely on reinforcement learning from human feedback (RLHF) to improve model alignment, transforming coarse human preferences over LLM outputs into a feedback signal that guides the model's learning. However, because this approach operates on sequence-level feedback, it lacks the precision to identify the exact parts of the output that affect user preferences. To address this gap, we propose a method to enhance LLM alignment through fine-grained token-level supervision. Specifically, we ask annotators to minimally edit less preferred responses within the standard reward modeling dataset to make them more favorable, ensuring changes are made only where necessary while retaining most of the original content. The refined dataset is used to train a token-level reward model, which is then used to train our fine-grained Proximal Policy Optimization (PPO) model. Our experimental results demonstrate that this approach achieves up to a \(5.1\%\) absolute improvement in LLM performance, measured by win rate against the reference model, compared with the traditional PPO model.
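
The key data step, turning a minimal annotator edit into token-level supervision, can be illustrated with a short sketch. The snippet below is not the paper's implementation: it assumes whitespace tokenization and uses Python's difflib to diff a less preferred response against its annotator-edited version, marking edited-away tokens as negative and untouched tokens as neutral, which is the kind of per-token signal a token-level reward model could be trained on.

```python
# Minimal sketch (not the paper's code): derive per-token labels from a
# minimal annotator edit of a less preferred response. Tokens the annotator
# replaced or deleted get a negative label; untouched tokens stay neutral.
import difflib


def token_level_labels(less_preferred: str, edited: str):
    """Return (token, label) pairs for the less preferred response.

    label = -1 for tokens removed or replaced by the minimal edit,
    label =  0 for tokens kept verbatim.
    """
    orig_tokens = less_preferred.split()   # assumption: whitespace tokenization
    edit_tokens = edited.split()
    labels = [0] * len(orig_tokens)

    matcher = difflib.SequenceMatcher(a=orig_tokens, b=edit_tokens)
    for op, i1, i2, _j1, _j2 in matcher.get_opcodes():
        if op in ("replace", "delete"):    # spans touched by the edit
            for i in range(i1, i2):
                labels[i] = -1
    return list(zip(orig_tokens, labels))


if __name__ == "__main__":
    less_preferred = "The capital of Australia is Sydney , a large coastal city ."
    edited = "The capital of Australia is Canberra , a planned inland city ."
    for token, label in token_level_labels(less_preferred, edited):
        print(f"{token:10s} {label:2d}")
```

In the paper's pipeline, labels of this kind would supervise a reward model that scores individual tokens, so PPO receives a dense per-token reward rather than a single sequence-level score; the tokenization, label scheme, and diff heuristic above are illustrative choices, not details taken from the paper.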

Similar Work