
SciAgent: Tool-augmented Language Models for Scientific Reasoning

Ma Yubo, Gou Zhibin, Hao Junheng, Xu Ruochen, Wang Shuohang, Pan Liangming, Yang Yujiu, Cao Yixin, Sun Aixin, Awadalla Hany, Chen Weizhu. arXiv 2024

Tags: Agentic, GPT, Model Architecture, Tools, Training Techniques

Scientific reasoning poses an exceptional challenge even for the most advanced Large Language Models (LLMs). To make this task more practical and solvable for LLMs, we introduce a new task setting named tool-augmented scientific reasoning. This setting supplements LLMs with scalable toolsets and shifts the focus from pursuing an omniscient problem solver to a proficient tool user. To facilitate research on this setting, we construct a tool-augmented training corpus named MathFunc, which encompasses over 30,000 samples and roughly 6,000 tools. Building on MathFunc, we develop SciAgent to retrieve, understand, and, if necessary, use tools for scientific problem solving. Additionally, we craft a benchmark, SciToolBench, spanning five scientific domains to evaluate LLMs' abilities with tool assistance. Extensive experiments on SciToolBench confirm the effectiveness of SciAgent. Notably, SciAgent-Mistral-7B surpasses other LLMs of the same size by more than 13% in absolute accuracy. Furthermore, SciAgent-DeepMath-7B substantially outperforms ChatGPT.
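The abstract frames the task as first retrieving suitable tools for a problem and then deciding whether to invoke them. The Python sketch below illustrates such a retrieve-then-call loop under stated assumptions: the toy toolset, keyword-overlap retriever, and solver are hypothetical stand-ins and do not reproduce the MathFunc corpus or the actual SciAgent pipeline.

```python
# Minimal sketch of a tool-augmented reasoning loop: retrieve candidate tools
# for a question, then call one if a match is found. Toolset, retriever, and
# solver are hypothetical stand-ins, not the paper's implementation.
import math

# Toy "toolset": name -> (callable, short docstring used for retrieval).
TOOLSET = {
    "projectile_range": (
        lambda v, theta_deg, g=9.81: v**2 * math.sin(2 * math.radians(theta_deg)) / g,
        "projectile range from launch speed and angle",
    ),
    "ideal_gas_pressure": (
        lambda n, t, v, r=8.314: n * r * t / v,
        "ideal gas pressure from moles, temperature, volume",
    ),
}

def retrieve_tools(question: str, top_k: int = 1):
    """Rank tools by naive keyword overlap between the question and tool docstrings."""
    q_words = set(question.lower().split())
    scored = [
        (len(q_words & set(doc.lower().split())), name)
        for name, (_, doc) in TOOLSET.items()
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_k] if score > 0]

def solve(question: str, args: dict):
    """Use a retrieved tool if one matches; otherwise signal that no tool applies."""
    tools = retrieve_tools(question)
    if not tools:
        return None  # the model would answer directly in this case
    fn, _ = TOOLSET[tools[0]]
    return fn(**args)

if __name__ == "__main__":
    q = "What is the range of a projectile launched at speed 20 m/s and angle 45 degrees?"
    print(solve(q, {"v": 20, "theta_deg": 45}))  # ~40.77 m
```

In this sketch the retriever is a simple keyword matcher; the paper's setting presumes a learned retriever over a much larger toolset, with the LLM generating the actual tool call.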
