LlaSMol: Advancing Large Language Models For Chemistry With A Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset

Yu Botao, Baker Frazier N., Chen Ziqi, Ning Xia, Sun Huan. arXiv 2024

[Paper]    
GPT Model Architecture RAG Training Techniques Uncategorized

Chemistry plays a crucial role in many domains, such as drug discovery and material science. While large language models (LLMs) such as GPT-4 exhibit remarkable capabilities on natural language processing tasks, existing research indicates that their performance on chemistry tasks is discouragingly low. In this paper, however, we demonstrate that our developed LLMs can achieve very strong results on a comprehensive set of chemistry tasks, outperforming the most advanced GPT-4 and Claude 3 Opus by a substantial margin. To accomplish this, we propose SMolInstruct, a large-scale, comprehensive, and high-quality dataset for instruction tuning. It contains 14 selected chemistry tasks and over three million samples, laying a solid foundation for training and evaluating LLMs for chemistry. Using SMolInstruct, we fine-tune a set of open-source LLMs, among which we find that Mistral serves as the best base model for chemistry tasks. Our analysis further demonstrates the critical role of the proposed dataset in driving the performance improvements.

Similar Work