
DogeRM: Equipping Reward Models With Domain Knowledge Through Model Merging

Lin Tzu-han, Li Chen-an, Lee Hung-yi, Chen Yun-nung. arXiv 2024

[Paper]
Agentic Merging Reinforcement Learning Tools Training Techniques

Reinforcement learning from human feedback (RLHF) is a popular strategy for aligning large language models (LLMs) with desired behaviors, and reward modeling is a crucial step in RLHF. However, collecting paired preference data for training reward models is often costly and time-consuming, especially for domain-specific preferences that require expert annotation. To address this challenge, we propose the **Do**main knowled**ge** merged **R**eward **M**odel (DogeRM), a novel framework that integrates domain-specific knowledge into a general reward model through model merging. The experiments demonstrate that DogeRM enhances performance across different benchmarks, and a detailed analysis showcases the effects of model merging, highlighting its great potential for facilitating model alignment.
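The abstract does not spell out the merging procedure, but a common form of model merging is weighted averaging of the parameters shared between two models. The sketch below (in PyTorch) illustrates this idea for merging a domain-specific model's backbone into a general reward model; the function name `merge_state_dicts` and the interpolation weight `alpha` are illustrative assumptions, not DogeRM's exact recipe.

```python
import torch


def merge_state_dicts(reward_model_sd, domain_model_sd, alpha=0.5):
    """Minimal sketch of weight-space model merging (assumed recipe).

    Parameters present in both models (e.g. the shared transformer
    backbone) are linearly interpolated with weight `alpha`; parameters
    unique to the reward model (e.g. its scalar reward head) are kept
    unchanged. DogeRM's actual merging strategy may differ.
    """
    merged = {}
    for name, rm_param in reward_model_sd.items():
        dm_param = domain_model_sd.get(name)
        if dm_param is not None and dm_param.shape == rm_param.shape:
            # Weighted average of the shared backbone weights.
            merged[name] = (1.0 - alpha) * rm_param + alpha * dm_param
        else:
            # Keep reward-model-specific parameters as they are.
            merged[name] = rm_param.clone()
    return merged
```

In use, one would pass the two models' `state_dict()` outputs to this function and load the result back into the general reward model with `load_state_dict`, so that only the overlapping backbone parameters are interpolated while the reward head is preserved.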

Similar Work