
Assessment Of Multimodal Large Language Models In Alignment With Human Values

Shi Zhelun, Wang Zhipin, Fan Hongxing, Zhang Zaibin, Li Lijun, Zhang Yongting, Yin Zhenfei, Sheng Lu, Qiao Yu, Shao Jing. arXiv 2024

[Paper]    
Multimodal Models Reinforcement Learning

Large Language Models (LLMs) aim to serve as versatile assistants aligned with human values, as defined by the principles of being helpful, honest, and harmless (hhh). However, for Multimodal Large Language Models (MLLMs), despite their commendable performance on perception and reasoning tasks, alignment with human values remains largely unexplored, given the complexity of defining the hhh dimensions in the visual world and the difficulty of collecting relevant data that accurately mirror real-world situations. To address this gap, we introduce Ch3Ef, a Compreh3ensive Evaluation dataset and strategy for assessing alignment with human expectations. The Ch3Ef dataset contains 1,002 human-annotated data samples, covering 12 domains and 46 tasks based on the hhh principle. We also present a unified evaluation strategy supporting assessment across diverse scenarios and from different perspectives. Based on the evaluation results, we summarize over 10 key findings that deepen the understanding of MLLM capabilities, limitations, and the dynamic relationships between evaluation levels, guiding future advancements in the field.
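To make the dataset's structure concrete, below is a minimal Python sketch of what a Ch3Ef-style sample and a per-dimension scoring pass might look like. All field names, the `Ch3EfSample` class, and the `accuracy_by_dimension` helper are hypothetical illustrations, not the paper's released schema or evaluation code; only the hhh dimensions and the 12-domain / 46-task organization come from the abstract.

```python
from dataclasses import dataclass

# Hypothetical schema for a single Ch3Ef-style sample; field names are
# illustrative assumptions, not taken from the released dataset.
@dataclass
class Ch3EfSample:
    image_path: str        # visual input shown to the MLLM
    question: str          # instruction or query about the image
    choices: list[str]     # candidate answers (multiple choice)
    answer_index: int      # index of the human-annotated answer
    dimension: str         # one of "helpful", "honest", "harmless"
    domain: str            # one of the 12 domains
    task: str              # one of the 46 tasks


def accuracy_by_dimension(samples: list[Ch3EfSample],
                          predictions: list[int]) -> dict[str, float]:
    """Aggregate per-dimension accuracy: a simple stand-in for one
    perspective of a unified, multi-scenario evaluation strategy."""
    correct: dict[str, int] = {}
    total: dict[str, int] = {}
    for sample, pred in zip(samples, predictions):
        total[sample.dimension] = total.get(sample.dimension, 0) + 1
        if pred == sample.answer_index:
            correct[sample.dimension] = correct.get(sample.dimension, 0) + 1
    return {dim: correct.get(dim, 0) / n for dim, n in total.items()}


if __name__ == "__main__":
    # Toy usage with a single made-up sample.
    toy = Ch3EfSample(
        image_path="example.jpg",
        question="Is the advice shown in the image safe to follow?",
        choices=["Yes", "No"],
        answer_index=1,
        dimension="harmless",
        domain="safety",
        task="risk identification",
    )
    print(accuracy_by_dimension([toy], [1]))  # {'harmless': 1.0}
```

Grouping scores by hhh dimension, domain, and task in this way is one plausible reading of "assessment across various scenarios and different perspectives"; the paper's actual strategy may aggregate differently.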

Similar Work