ChemVLM: Exploring The Power Of Multimodal Large Language Models In Chemistry Area

Li Junxian, Zhang Di, Wang Xunzhi, Hao Zeying, Lei Jingdi, Tan Qian, Zhou Cai, Liu Wei, Yang Yaotian, Xiong Xinrui, Wang Weiyun, Chen Zhe, Wang Wenhai, Li Wei, Zhang Shufei, Su Mao, Ouyang Wanli, Li Yuqiang, Zhou Dongzhan. arXiv 2024

[Paper]

Tags: Applications, Multimodal Models

Large Language Models (LLMs) have achieved remarkable success and have been applied across various scientific fields, including chemistry. However, many chemical tasks require the processing of visual information, which cannot be successfully handled by existing chemical LLMs. This brings a growing need for models capable of integrating multimodal information in the chemical domain. In this paper, we introduce **ChemVLM**, an open-source chemical multimodal large language model specifically designed for chemical applications. ChemVLM is trained on a carefully curated bilingual multimodal dataset that enhances its ability to understand both textual and visual chemical information, including molecular structures, reactions, and chemistry examination questions. We develop three datasets for comprehensive evaluation, tailored to Chemical Optical Character Recognition (OCR), Multimodal Chemical Reasoning (MMCR), and Multimodal Molecule Understanding tasks. We benchmark ChemVLM against a range of open-source and proprietary multimodal large language models on various tasks. Experimental results demonstrate that ChemVLM achieves competitive performance across all evaluated tasks. Our model can be found at https://huggingface.co/AI4Chem/ChemVLM-26B.
