
Automated Multi-level Preference for MLLMs

Zhang Mengxi, Wu Wenhao, Lu Yu, Song Yuxin, Rong Kang, Yao Huanjin, Zhao Jianbo, Liu Fanglong, Sun Yifan, Feng Haocheng, Wang Jingdong. arXiv 2024

[Paper] [Code]    
Agentic Efficiency And Optimization Has Code Multimodal Models RAG Reinforcement Learning Tools

Current multimodal Large Language Models (MLLMs) suffer from "hallucination", occasionally generating responses that are not grounded in the input images. To tackle this challenge, one promising path is to utilize reinforcement learning from human feedback (RLHF), which steers MLLMs towards learning superior responses while avoiding inferior ones. We rethink the common practice of using binary preferences (i.e., superior, inferior) and find that adopting multi-level preferences (e.g., superior, medium, inferior) offers two benefits: 1) it narrows the gap between adjacent levels, thereby encouraging MLLMs to discern subtle differences; 2) it further integrates cross-level comparisons (beyond adjacent-level comparisons), thus providing a broader range of comparisons with hallucination examples. To verify our viewpoint, we present the Automated Multi-level Preference (AMP) framework for MLLMs. To facilitate this framework, we first develop an automated dataset generation pipeline that provides high-quality multi-level preference datasets without any human annotators. Furthermore, we design the Multi-level Direct Preference Optimization (MDPO) algorithm to robustly conduct complex multi-level preference learning. Additionally, we propose a new hallucination benchmark, MRHal-Bench. Extensive experiments across public hallucination and general benchmarks, as well as our MRHal-Bench, demonstrate the effectiveness of our proposed method. Code is available at https://github.com/takomc/amp.
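To illustrate the core idea of the abstract, here is a minimal sketch of how moving from binary to multi-level preferences expands the comparison set: a ranking of three levels yields both adjacent-level pairs and an extra cross-level pair. The function name and the plain pair enumeration are illustrative assumptions, not the paper's MDPO implementation, which defines a dedicated optimization objective over these comparisons.

```python
from itertools import combinations

def multilevel_preference_pairs(ranked_responses):
    """Given responses ordered from best to worst (one per preference
    level), emit every (preferred, rejected) pair. This covers both
    adjacent-level and cross-level comparisons mentioned in the abstract."""
    return [
        (ranked_responses[i], ranked_responses[j])
        for i, j in combinations(range(len(ranked_responses)), 2)
    ]

# Three levels produce 3 pairs: two adjacent-level
# (superior, medium), (medium, inferior) and one
# cross-level (superior, inferior). Binary preferences
# would provide only a single (superior, inferior) pair.
pairs = multilevel_preference_pairs(["superior", "medium", "inferior"])
```

Each emitted pair can then feed a DPO-style objective that pushes the model's likelihood toward the preferred response and away from the rejected one.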

Similar Work