
SciAssess: Benchmarking LLM Proficiency in Scientific Literature Analysis

Cai Hengxing, Cai Xiaochen, Chang Junhan, Li Sihang, Yao Lin, Wang Changxin, Gao Zhifeng, Wang Hongshuai, Li Yongge, Lin Mujie, Yang Shuwen, Wang Jiankun, Xu Mingjun, Huang Jin, Xi Fang, Zhuang Jiaxi, Yin Yuqi, Li Yaqi, Chen Changhong, Cheng Zheng, Zhao Zifeng, Zhang Linfeng, Ke Guolin. arXiv 2024

[Paper]    
Tags: Applications, GPT, Model Architecture, Multimodal Models, Reinforcement Learning

Recent breakthroughs in Large Language Models (LLMs) have revolutionized natural language understanding and generation, sparking significant interest in applying them to scientific literature analysis. However, existing benchmarks fail to adequately evaluate the proficiency of LLMs in this domain, particularly in scenarios requiring higher-level abilities beyond mere memorization and the handling of multimodal data. In response to this gap, we introduce SciAssess, a benchmark specifically designed for the comprehensive evaluation of LLMs in scientific literature analysis. SciAssess aims to thoroughly assess the efficacy of LLMs by focusing on their capabilities in Memorization (L1), Comprehension (L2), and Analysis & Reasoning (L3). It encompasses a variety of tasks drawn from diverse scientific fields, including fundamental science, alloy materials, biomedicine, drug discovery, and organic materials. To ensure the reliability of SciAssess, rigorous quality control measures have been implemented, ensuring accuracy, anonymization, and compliance with copyright standards. SciAssess evaluates 11 LLMs, including GPT, Claude, and Gemini, highlighting their strengths and areas for improvement. This evaluation supports the ongoing development of LLM applications in the analysis of scientific literature. SciAssess and its resources are available at https://sci-assess.github.io/.
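As a rough illustration of how results from a multi-level benchmark of this kind might be summarized, the sketch below groups per-task accuracies by ability level (L1–L3) and reports a mean score per level. The task names, fields, and numbers are hypothetical placeholders, not SciAssess data, and the code does not reflect the benchmark's actual tooling or API.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-task results for one model; names and numbers are
# illustrative placeholders, not actual SciAssess tasks or scores.
results = [
    {"task": "fact_recall",        "field": "fundamental science", "level": "L1", "accuracy": 0.82},
    {"task": "table_extraction",   "field": "alloy materials",     "level": "L2", "accuracy": 0.64},
    {"task": "reaction_reasoning", "field": "drug discovery",      "level": "L3", "accuracy": 0.41},
]

def aggregate_by_level(records):
    """Group task accuracies by ability level and return the mean per level."""
    buckets = defaultdict(list)
    for record in records:
        buckets[record["level"]].append(record["accuracy"])
    return {level: mean(scores) for level, scores in sorted(buckets.items())}

print(aggregate_by_level(results))
# e.g. {'L1': 0.82, 'L2': 0.64, 'L3': 0.41}
```

Reporting one aggregate per ability level, rather than a single overall score, makes it easier to see whether a model's strength lies in recall (L1) or in the harder comprehension and reasoning tasks (L2, L3) that the benchmark emphasizes.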

Similar Work