ThoughtSource⚡
A framework for the science of machine thinking
Datasets • Tutorial notebook • Installation guide • Dataset Annotator
ThoughtSource is a central, open resource and community centered on data and tools for chain-of-thought reasoning in large language models (Wei 2022). Our long-term goal is to enable trustworthy and robust reasoning in advanced AI systems for driving scientific research and medical practice.
<p align="center"> <img alt="ThoughtSource overview 3" src="./resources/images/thoughtsource-overview-3.svg"> </p>📄 Pre-print: Ott et al. "ThoughtSource: A central hub for large language model reasoning data", arXiv, 2023
📄 Pre-print: Hebenstreit et al. "An automatically discovered chain-of-thought prompt generalizes to novel models and datasets", arXiv, 2023
Workflow
<p align="center"> <img alt="ThoughtSource overview 1" src="./resources/images/thoughtsource-overview-1.svg"> <img alt="ThoughtSource overview 2" src="./resources/images/thoughtsource-overview-2.svg"> </p>Available datasets
Our dataloaders allow you to access the following datasets in a standardized chain-of-thought format. The dataloaders create objects in the Hugging Face 🤗 Datasets format. We (sometimes extensively) post-processed the source datasets in different ways to create more coherent reasoning chains.
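For example, loading a single dataset through its dataloader can look like the minimal sketch below (the `from cot import Collection` import path is an assumption based on the tutorial notebook; the calls themselves are the ones used in the full example under "Libraries"):

```python
from cot import Collection  # assumed import path for the cot package

# Download a source dataset and convert it into the standardized chain-of-thought schema
collection = Collection(["worldtree"], verbose=False)

# Draw a small random sample from the train split (same call as in the example further below)
sample = collection.select(split="train", number_samples=5)
print(sample)  # the underlying objects are Hugging Face 🤗 Datasets
```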
<p align="center"> Datasets can be <a href="http://thought.samwald.info/"><b>browsed online through the Dataset Viewer 🔎</b></a> </p>
General question answering
- commonsense_qa: Multiple-choice commonsense knowledge question answering dataset (Talmor 2018, License: MIT). Reasoning chains from three different sources are included:
  - Human-generated reasoning chains derived from the ECQA dataset (Aggarwal 2021) for the train and validation splits. Used as gold standard. License: Community Data License Agreement Sharing 1.0.
  - AI-generated (few-shot prompting) reasoning chains from Wei 2022. Only available for the validation split. License: Unknown.
  - AI-generated (zero-shot prompting) reasoning chains from Kojima 2022. Only available for the validation split. License: Unknown.
- strategy_qa: General-domain question-answering data from the StrategyQA dataset (Geva 2021); reasoning chains are derived from the original dataset. License: MIT.
  - Human-generated reasoning chains derived from the original dataset for the train split. Used as gold standard. License: MIT.
  - AI-generated (few-shot) reasoning chains from Wei 2022. Only available for the train split. License: Unknown.
  - AI-generated (zero-shot) reasoning chains from Kojima 2022. Only available for the train split. License: Unknown.
- qed: General-domain question-answering data and justifications from the QED dataset (Lamm 2020). License: CC BY-SA 3.0.
Scientific / medical question answering
- worldtree: Scientific question-answering data from the WorldTree v2 dataset (Xie 2020). Human-generated reasoning chains derived from the original dataset. License: AI2 Mercury.
- entailment_bank: Science exam questions with expert-authored explanations from the EntailmentBank dataset (Dalvi 2022). Human-generated reasoning chains derived from the original dataset. License: CC BY 4.0. (Note: significant overlap with worldtree v2)
- open_book_qa: Scientific question-answering modeled after open book exams for assessing human understanding from the OpenBookQA dataset (Mihaylov 2018). Human-generated reasoning chains derived from the original dataset. License: Apache License 2.0.
- med_qa (USMLE subset): Free-form multiple-choice OpenQA dataset containing questions from US medical board exams (USMLE) (Jin 2020). Note: the original MedQA dataset also provides Chinese-language data, which is currently not included. License: MIT. <br> The dataset is additionally available in an open-answer version (Nair 2023). License: MIT.
  - AI-generated (zero-shot) reasoning chains derived from Liévin 2022. Only available for the test split (US questions only). License: Unknown.
- medmc_qa: Multiple-Choice Question Answering dataset containing real-world medical entrance exam questions from the All India Institute of Medical Sciences (AIIMS PG) and National Eligibility cum Entrance Test (NEET PG). (Pal 2022). License: MIT.
  - Human-generated reasoning chains derived from the original dataset for ~85% of the train and validation splits. Used as gold standard. License: MIT.
  - AI-generated (zero-shot) reasoning chains derived from Liévin 2022. Only available for 1000 samples from the validation split. License: CC-BY.
- mmlu: MMLU (Massive Multitask Language Understanding) is a compendium of 57 distinct question-and-answer tasks. We include the six subjects related to medicine: anatomy, clinical knowledge, college biology, college medicine, medical genetics, and professional medicine. License: MIT.
- pubmed_qa: QA dataset containing biomedical questions extracted from PubMed abstracts that can be answered with yes/no/maybe (Jin 2019). License: MIT.
  - Human-generated reasoning chains derived from the original dataset. Used as gold standard. License: MIT.
  - AI-generated (zero-shot) reasoning chains derived from Liévin 2022. Only available for the test split. License: CC-BY.
Math word problems
- aqua: Math word problems from the AQUA-RAT (Algebra Question Answering with Rationales) dataset (Ling 2017). Reasoning chains derived from the original dataset. License: Apache 2.0.
- asdiv: Math word problems from the Academia Sinica Diverse MWP dataset (Miao 2020). Reasoning chains derived from the original dataset. License: CC BY-NC 4.0.
- gsm8k: Math word problems from the GSM8K dataset (Cobbe 2021). Reasoning chains derived from the original dataset. License: MIT.
- mawps: Math word problems from MAWPS, the Math Word Problem Repository dataset (Koncel-Kedziorski 2016). Reasoning chains derived from the original dataset. License: MIT.
- svamp: Math word problems. Source: SVAMP (Patel 2021). Reasoning chains derived from the original dataset. License: MIT.
Collections of datasets
For quick and economical formative evaluation of CoT reasoning, we combined random examples from the above datasets into collections.
- ThoughtSource_33 (Hebenstreit 2023) is a collection made up of 33 samples each from Commonsense QA, MedQA (USMLE), MedMCQA, OpenBookQA, StrategyQA and WorldTree V2. We generated zero-shot CoTs with ten different prompting strategies, each employed by six models: davinci-002, davinci-003, GPT-3.5-turbo, GPT-4, Flan-T5-XXL and Cohere's command-xlarge-nightly. The data can easily be accessed:
```python
collection = Collection.load_thoughtsource_33()
```
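Once loaded, the pre-generated chains can be filtered and scored with the calls documented elsewhere in this README. A hedged sketch (the import path is assumed from the tutorial notebook, and the `author` filter is the one shown in the versioning notes below, reused here purely for illustration):

```python
from cot import Collection  # assumed import path for the cot package

# Load the ThoughtSource_33 collection with its pre-generated reasoning chains
collection = Collection.load_thoughtsource_33()

# Optionally keep only a subset of the generated CoTs
# (see select_generated_cots in the versioning notes below)
collection.select_generated_cots(author="thoughtsource")

# Score the answers extracted from the retained reasoning chains
print(collection.evaluate())
```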
We are working on collecting and generating additional datasets, and on further improving the quality of existing datasets (see dataset issues). We welcome suggestions for the inclusion of other datasets.
We welcome dataset contributions! 👉 Have a look at our contribution guide!
Annotator
<p align="center"> <img alt="Demonstration of the annotator tool" src="./resources/images/annotator-demo.webp" width="80%">The annotator allows for highlighting similarities between different generated reasoning chains, making it easier to spot strenghts and weaknesses and to select best results.
</p><p align="center"> <a href="http://thought.samwald.info:3000/"><b> Use the web-based annotator 📝</b></a><br/> To try out the annotator, simply type in your name and load this<a href="https://github.com/OpenBioLink/ThoughtSource/blob/main/notebooks/worldtree_10.json" target="_blank"> example file</a> </p>
<br/>
Installation and code structure
Installation
Execute the following in a terminal, line by line:
```bash
git clone git@github.com:OpenBioLink/ThoughtSource.git
cd ThoughtSource

# install pip and virtualenv
sudo apt install python3-pip
sudo apt install python3-venv

# create and activate virtual environment
python3 -m venv venv
source ./venv/bin/activate

# install requirements and API packages
pip install -e ./libs/cot[api]
```
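To verify the installation, you can try importing the package (a minimal sanity check; the `cot` import path is an assumption based on the tutorial notebook):

```python
# Run inside the activated virtual environment.
from cot import Collection  # package installed from ./libs/cot above

print(Collection)  # an ImportError here means the installation did not succeed
```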
Applications
- annotator: Web-based tool for annotating chain-of-thought data.
- dataset-viewer: Streamlit application for browsing ThoughtSource datasets.
Libraries
- cot:
  - dataloader: Creating and processing of ThoughtSource datasets (based on the Hugging Face 🤗 Datasets library).
  - generate: Generating reasoning chains with a wide variety of language models (currently OpenAI and models on the Hugging Face Hub).
  - evaluate: Evaluating the performance of answers extracted from generated reasoning chains.
```python
from cot import Collection  # the cot library installed above

# 1) Dataset loading and selecting a random sample
collection = Collection(["worldtree"], verbose=False)
collection = collection.select(split="train", number_samples=10)

# 2) Language model generates chains of thought and then extracts answers
config = {
    "instruction_keys": ["qa-01"],             # "Answer the following question through step-by-step reasoning."
    "cot_trigger_keys": ["kojima-01"],         # "Answer: Let's think step by step."
    "answer_extraction_keys": ["kojima-A-D"],  # "Therefore, among A through D, the answer is"
    "api_service": "huggingface_hub",
    "engine": "google/flan-t5-xl",
    "warn": False,
    "verbose": False,
}
collection.generate(config=config)

# 3) Performance evaluation
collection.evaluate()
# {'accuracy': {'qa-01_kojima-01_kojima-A-D': 0.6}}
```
<p align="center"> 👉 See the <a href="https://github.com/OpenBioLink/ThoughtSource/blob/main/notebooks/tutorial.ipynb/"><b>tutorial notebook</b></a> for more code examples. </p>
Citation
```bibtex
@misc{https://doi.org/10.48550/arxiv.2301.11596,
  doi = {10.48550/ARXIV.2301.11596},
  url = {https://arxiv.org/abs/2301.11596},
  author = {Ott, Simon and Hebenstreit, Konstantin and Liévin, Valentin and Hother, Christoffer Egeberg and Moradi, Milad and Mayrhauser, Maximilian and Praas, Robert and Winther, Ole and Samwald, Matthias},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title = {ThoughtSource: A central hub for large language model reasoning data},
  publisher = {arXiv},
  year = {2023},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
Versioning
All updates/changes to datasets are explicitly mentioned in bold.
<details> <summary>1.0.0 (2023-07-11)</summary>

- Released ThoughtSource_33 collection with 60 reasoning chains for each item:
  `Collection.load_thoughtsource_33()`
- Added an option for creating chained commands
- Added chat option for GPT models
- Added filtering functions for already generated chains of thought
- Added new datasets: MMLU (six medical subsets) and open-ended question version of MedQA
- Added a function to select which generated CoTs to keep after loading:
  `collection.select_generated_cots(author="thoughtsource")`
- Improved evaluation function
- Added a function to load ThoughtSource100 collection:
  `Collection.load_thoughtsource_100()`
- Released ThoughtSource_100 collection with reasoning chains from GPT text-davinci-003, flan-t5-xxl, and Cohere's command-xl
- Updated annotator tool to the correct data schema (this might cause errors when loading old datasets from JSON files)
- pubmed_qa: Included "LONG_ANSWER" from the original schema as "cot" in the ThoughtSource schema
- Initial release after Twitter announcement of project