Climategpt: Towards AI Synthesizing Interdisciplinary Research On Climate Change

Thulke David, Gao Yingbo, Pelser Petrus, Brune Rein, Jalota Rricha, Fok Floris, Ramos Michael, Van Wyk Ian, Nasir Abdallah, Goldstein Hayden, Tragemann Taylor, Nguyen Katie, Fowler Ariana, Stanco Andrew, Gabriel Jon, Taylor Jordan, Moro Dean, Tsymbalov Evgenii, De Waal Juliette, Matusov Evgeny, Yaghi Mudar, Shihadah Mohammad, Ney Hermann, Dugast Christian, Dotan Jonathan, Erasmus Daniel. Arxiv 2024

[Paper]    
Applications GPT Model Architecture Training Techniques

This paper introduces ClimateGPT, a family of domain-specific large language models that synthesize interdisciplinary research on climate change. We trained two 7B models from scratch on a science-oriented dataset of 300B tokens: for the first model, the 4.2B domain-specific tokens were included during pre-training, while the second was adapted to the climate domain after pre-training. Additionally, ClimateGPT-7B, 13B, and 70B are continuously pre-trained from Llama 2 on a domain-specific dataset of 4.2B tokens. Each model is instruction fine-tuned on a high-quality, human-generated domain-specific dataset created in close cooperation with climate scientists. To reduce hallucinations, we optimize the model for retrieval augmentation and propose a hierarchical retrieval strategy. To increase the accessibility of our model to non-English speakers, we propose making use of cascaded machine translation and show that this approach can perform comparably to natively multilingual models while being easier to scale to a large number of languages. Further, to address the intrinsically interdisciplinary nature of climate change, we consider different research perspectives: the model can produce in-depth answers focused on individual perspectives in addition to an overall answer. We propose a suite of automatic climate-specific benchmarks to evaluate LLMs. On these benchmarks, ClimateGPT-7B performs on par with the ten-times-larger Llama-2-70B Chat model without degrading results on general-domain benchmarks. Our human evaluation confirms the trends observed in our benchmarks. All models were trained and evaluated using renewable energy and are released publicly.
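The cascaded machine translation approach described in the abstract can be illustrated with a minimal sketch: a non-English query is translated into English, answered by the English-centric domain model, and the answer is translated back. The `translate` and `climate_llm` functions below are hypothetical placeholders (the paper uses production MT and LLM systems); only the pipeline structure is from the abstract.

```python
def translate(text: str, src: str, tgt: str) -> str:
    """Placeholder MT system (hypothetical). A real deployment would call
    a machine translation model here; this stub only tags the text so the
    pipeline shape is visible and testable."""
    return f"[{src}->{tgt}] {text}"


def climate_llm(prompt_en: str) -> str:
    """Placeholder for the English-only domain model (hypothetical stub)."""
    return f"Answer to: {prompt_en}"


def cascaded_answer(query: str, user_lang: str) -> str:
    """Cascaded MT pipeline: translate in, answer in English, translate out."""
    # 1. Translate the user's query into English (skip if already English).
    query_en = translate(query, user_lang, "en") if user_lang != "en" else query
    # 2. Run the English-centric domain model on the translated query.
    answer_en = climate_llm(query_en)
    # 3. Translate the English answer back into the user's language.
    return translate(answer_en, "en", user_lang) if user_lang != "en" else answer_en
```

Because only the two translation steps depend on the user's language, supporting an additional language requires no change to the underlying LLM, which is the scaling advantage the abstract claims over natively multilingual models.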

Similar Work