Thoughtsource: A Central Hub For Large Language Model Reasoning Data

Ott Simon, Hebenstreit Konstantin, Liévin Valentin, Hother Christoffer Egeberg, Moradi Milad, Mayrhauser Maximilian, Praas Robert, Winther Ole, Samwald Matthias. Scientific Data 2023

Tags: Applications, Ethics And Bias, GPT, Model Architecture, Prompting, Reinforcement Learning, Tools, Training Techniques

Large language models (LLMs) such as GPT-4 have recently demonstrated impressive results across a wide range of tasks. LLMs are still limited, however, in that they frequently fail at complex reasoning, their reasoning processes are opaque, they are prone to ‘hallucinate’ facts, and there are concerns about their underlying biases. Letting models verbalize reasoning steps as natural language, a technique known as chain-of-thought prompting, has recently been proposed as a way to address some of these issues. Here we present ThoughtSource, a meta-dataset and software library for chain-of-thought (CoT) reasoning. The goal of ThoughtSource is to improve future artificial intelligence systems by facilitating qualitative understanding of CoTs, enabling empirical evaluations, and providing training data. This first release of ThoughtSource integrates seven scientific/medical, three general-domain and five math word question answering datasets.
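
To make the chain-of-thought idea concrete, below is a minimal sketch of how a few-shot CoT prompt can be assembled from question-answering items of the kind ThoughtSource collects. The field names (`question`, `cot`, `answer`) and the example data are illustrative assumptions for this sketch, not the library's actual schema.

```python
# Minimal sketch: building a few-shot chain-of-thought prompt from QA items.
# Field names ("question", "cot", "answer") are illustrative assumptions,
# not the ThoughtSource data format.

def build_cot_prompt(examples, new_question):
    """Format worked examples with explicit reasoning steps, then append
    the new question so the model is nudged to verbalize its own steps."""
    parts = []
    for ex in examples:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['cot']} The answer is {ex['answer']}.\n"
        )
    parts.append(f"Q: {new_question}\nA: Let's think step by step.")
    return "\n".join(parts)

# Toy worked example; real CoT datasets supply many such annotated items.
examples = [
    {
        "question": "A farmer has 3 pens with 4 sheep in each. How many sheep are there?",
        "cot": "Each pen holds 4 sheep and there are 3 pens, so 3 * 4 = 12.",
        "answer": "12",
    }
]

print(build_cot_prompt(examples, "If 5 boxes hold 6 apples each, how many apples are there?"))
```

Prompts of this shape are what make the model's intermediate reasoning inspectable, which is the qualitative-understanding goal the abstract describes.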

Similar Work