The Real, The Better: Aligning Large Language Models With Online Human Behaviors

Guanying Jiang, Lingyong Yan, Haibo Shi, Dawei Yin. arXiv 2024

[Paper]    
Tags: Agentic, Ethics And Bias, RAG, Reinforcement Learning, Security, Tools, Training Techniques

Large language model alignment is widely studied and applied to prevent LLMs from producing unhelpful and harmful responses. However, lengthy training processes and predefined preference biases hinder adaptation to diverse online human preferences. To this end, this paper proposes an alignment framework, Reinforcement Learning with Human Behavior (RLHB), that aligns LLMs by directly leveraging real online human behaviors. Adopting a generative adversarial framework, the generator is trained to respond in line with expected human behavior, while the discriminator tries to verify whether the (query, response, human behavior) triplets come from real online environments. Modeling behavior in natural-language form, together with joint multi-model training, enables active and sustainable online alignment. Experimental results confirm the effectiveness of the proposed method in both human and automatic evaluations.
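The abstract describes an adversarial loop: a discriminator scores (query, response, behavior) triplets as real or generated, and that score steers the generator toward responses consistent with real online behavior. Below is a minimal toy sketch of such a loop in PyTorch. Everything in it is an illustrative assumption (random-tensor stand-ins for the LLM and text encoders, a direct gradient in place of the paper's RL update); it is not the authors' implementation.

```python
# Toy sketch of the RLHB-style adversarial loop described in the abstract.
# Module names, shapes, and the reward shaping are illustrative assumptions.
import torch
import torch.nn as nn

EMB = 64  # toy embedding size for (query, response, behavior) encodings

class Discriminator(nn.Module):
    """Scores whether a (query, response, behavior) triplet looks like a
    real online interaction (label 1) or a generated one (label 0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * EMB, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, q, r, b):
        return self.net(torch.cat([q, r, b], dim=-1)).squeeze(-1)

# Stand-in for the LLM policy: here just a linear map from query embedding
# to response embedding; in RLHB this would be the LLM being aligned.
policy = nn.Linear(EMB, EMB)
disc = Discriminator()
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(policy.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    # Toy batch: real online triplets vs. triplets with generated responses.
    q = torch.randn(32, EMB)          # encoded queries
    b = torch.randn(32, EMB)          # encoded human-behavior signals
    r_real = torch.randn(32, EMB)     # responses humans actually engaged with
    r_fake = policy(q).detach()       # generated responses (no generator grad)

    # 1) Discriminator update: real triplets -> 1, generated triplets -> 0.
    loss_d = bce(disc(q, r_real, b), torch.ones(32)) + \
             bce(disc(q, r_fake, b), torch.zeros(32))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator update: the discriminator's score acts as the reward,
    # pushing responses toward those consistent with real human behavior.
    reward = disc(q, policy(q), b)
    loss_g = -reward.mean()           # maximize discriminator score
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Note that this toy backpropagates the discriminator score directly into the generator, which only works because the stand-in responses are continuous embeddings. For an actual LLM generating discrete text, as RLHB's name implies, the discriminator score would instead serve as the reward in a reinforcement-learning update (e.g., a policy-gradient method) on the generator.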

Similar Work