
SmileyLlama: Modifying Large Language Models for Directed Chemical Space Exploration

Cavanagh Joseph M., Sun Kunyang, Gritsevskiy Andrew, Bagni Dorian, Bannister Thomas D., Head-Gordon Teresa. arXiv 2024

[Paper]    
Efficiency And Optimization, Fine Tuning, Pretraining Methods, Prompting, Reinforcement Learning, Tools, Training Techniques

Here we show that a Large Language Model (LLM) can serve as a foundation model for a Chemical Language Model (CLM), performing at or above the level of CLMs trained solely on chemical SMILES string data. Using supervised fine-tuning (SFT) and direct preference optimization (DPO) on the open-source Llama LLM, we demonstrate that an LLM can be trained to respond to prompts such as requests to generate molecules with properties of interest to drug development. This overall framework allows an LLM not just to serve as a chatbot client for chemistry and materials tasks, but to be adapted into a CLM that generates molecules with user-specified properties.
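To make the workflow concrete, below is a minimal sketch (not the authors' code) of prompting a property-conditioned causal LLM for a molecule and validating the output with RDKit. The model name, prompt wording, and property thresholds are illustrative assumptions; the paper's actual SFT/DPO prompt format may differ.

```python
# Sketch: query a chemistry-tuned causal LLM for a drug-like molecule,
# then check that the generated SMILES string parses with RDKit.
from transformers import AutoModelForCausalLM, AutoTokenizer
from rdkit import Chem

# Assumed base model; the paper fine-tunes an open-source Llama model.
MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

# Hypothetical property-conditioned prompt in the spirit of the paper's SFT data.
prompt = ("Generate a SMILES string for a drug-like molecule with "
          "molecular weight below 500 and logP below 5:\n")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens, dropping the prompt.
text = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                        skip_special_tokens=True)

smiles = text.strip().split()[0] if text.strip() else ""
mol = Chem.MolFromSmiles(smiles)  # returns None if the SMILES is invalid
print(smiles, "-> valid" if mol else "-> invalid")
```

In practice one would sample many completions and filter for validity and the requested property ranges; the DPO stage described in the abstract can then reward generations that satisfy those constraints.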
