Assessing Large Language Models On Climate Information

Bulian Jannis, Schäfer Mike S., Amini Afra, Lam Heidi, Ciaramita Massimiliano, Gaiarin Ben, Hübscher Michelle Chen, Buck Christian, Mede Niels G., Leippold Markus, Strauß Nadine. Proceedings of the 2023

[Paper]

Tags: Reinforcement Learning, Tools

As Large Language Models (LLMs) rise in popularity, it is necessary to assess their capability in critically relevant domains. We present a comprehensive evaluation framework, grounded in science communication research, to assess LLM responses to questions about climate change. Our framework emphasizes both presentational and epistemological adequacy, offering a fine-grained analysis of LLM generations spanning 8 dimensions and 30 issues. Our evaluation task is a real-world example of a growing number of challenging problems where AI can complement and lift human performance. We introduce a novel protocol for scalable oversight that relies on AI Assistance and raters with relevant education. We evaluate several recent LLMs on a set of diverse climate questions. Our results point to a significant gap between surface and epistemological qualities of LLMs in the realm of climate communication.
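
The abstract describes an evaluation framework spanning 8 dimensions and 30 issues, applied by raters working with AI assistance. As an illustration only, the sketch below shows one way such ratings could be recorded and aggregated; the class names, dimension labels, and example issues are hypothetical and are not taken from the paper.

```python
# Illustrative sketch: recording per-issue rater judgments for an LLM response
# along presentational and epistemological dimensions. All names below are
# hypothetical placeholders, not the paper's actual taxonomy.

from dataclasses import dataclass, field
from typing import List

# Hypothetical example dimensions; the paper defines 8 dimensions and 30 issues,
# whose exact labels are not reproduced here.
PRESENTATIONAL_DIMENSIONS = ["style", "clarity", "tone"]
EPISTEMOLOGICAL_DIMENSIONS = ["accuracy", "specificity", "uncertainty_communication"]


@dataclass
class IssueRating:
    """A single rater judgment: whether a given issue is present in the response."""
    dimension: str             # e.g. "accuracy" (hypothetical label)
    issue: str                 # e.g. "overstates scientific consensus" (hypothetical)
    present: bool              # rater's verdict
    rater_id: str              # anonymized rater identifier
    ai_assisted: bool = False  # whether the rater saw AI-generated assistance


@dataclass
class ResponseEvaluation:
    """All ratings collected for one (question, model response) pair."""
    question: str
    response: str
    model_name: str
    ratings: List[IssueRating] = field(default_factory=list)

    def issue_rate(self, dimension: str) -> float:
        """Fraction of ratings in a dimension that flag an issue."""
        relevant = [r for r in self.ratings if r.dimension == dimension]
        if not relevant:
            return 0.0
        return sum(r.present for r in relevant) / len(relevant)


if __name__ == "__main__":
    ev = ResponseEvaluation(
        question="How much has global mean temperature risen since pre-industrial times?",
        response="Roughly 1.1 degrees Celsius, according to recent assessments.",
        model_name="example-llm",
    )
    ev.ratings.append(IssueRating("accuracy", "outdated figure", present=False,
                                  rater_id="r1", ai_assisted=True))
    ev.ratings.append(IssueRating("clarity", "ambiguous phrasing", present=False,
                                  rater_id="r2"))
    print(f"Accuracy issue rate: {ev.issue_rate('accuracy'):.2f}")
```

Separating presentational from epistemological dimensions in the data structure mirrors the distinction the abstract draws between surface and epistemological qualities, and makes it straightforward to aggregate the two separately.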

Similar Work