Datasets | Website | Raw Data | OpenReview
# SustainBench: Benchmarks for Monitoring the Sustainable Development Goals with Machine Learning
Christopher Yeh, Chenlin Meng, Sherrie Wang, Anne Driscoll, Erik Rozi, Patrick Liu, Jihyeon Lee, Marshall Burke, David B. Lobell, Stefano Ermon
California Institute of Technology, Stanford University, and UC Berkeley
SustainBench is a collection of 15 benchmark tasks across 7 SDGs, including tasks related to economic development, agriculture, health, education, water and sanitation, climate action, and life on land. Datasets for 11 of the 15 tasks are released publicly for the first time. Our goals for SustainBench are to
- lower the barriers to entry for the machine learning community to contribute to measuring and achieving the SDGs;
- provide standard benchmarks for evaluating machine learning models on tasks across a variety of SDGs; and
- encourage the development of novel machine learning methods where improved model performance facilitates progress towards the SDGs.
## Table of Contents

- [Overview](#overview)
- [Dataloaders](#dataloaders)
- [Running Baseline Models](#running-baseline-models)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Computing Requirements](#computing-requirements)
- [Code Formatting and Type Checking](#code-formatting-and-type-checking)
- [Citation](#citation)
## Overview
SustainBench provides datasets and standardized benchmarks for 15 SDG-related tasks, listed below. Details for each dataset and task can be found in our paper and on our website. The raw data can be downloaded from Google Drive and is released under a CC-BY-SA 4.0 license.
<img src="https://github.com/sustainlab-group/sustainbench/blob/gh-pages/assets/images/fig1.png" width="600">

- SDG 1: No Poverty
- SDG 2: Zero Hunger
- SDG 3: Good Health and Well-being
- SDG 4: Quality Education
  - Task 4A: Women's educational attainment
- SDG 6: Clean Water and Sanitation
- SDG 13: Climate Action
  - Task 13A: Brick kiln classification
- SDG 15: Life on Land
## Dataloaders
For each dataset, we provide Python dataloaders that load the data as PyTorch tensors. Please see the `sustainbench` folder as well as our website for detailed documentation.
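For orientation, here is a minimal sketch of a typical loading workflow. It assumes the WILDS-style `get_dataset` / `get_subset` interface described on our website; verify the exact names and signatures against the `sustainbench` folder.

```python
# Illustrative sketch only: get_dataset, the 'poverty' dataset name, and
# the (x, y, metadata) batch structure follow the WILDS-style API
# described on the website; check the sustainbench folder for specifics.
from torch.utils.data import DataLoader

from sustainbench import get_dataset

# Download (if necessary) and load the poverty-prediction dataset.
dataset = get_dataset(dataset='poverty', download=True)

# Take the standard train split and wrap it in a PyTorch DataLoader.
train_data = dataset.get_subset('train')
train_loader = DataLoader(train_data, batch_size=16, shuffle=True)

for x, y, metadata in train_loader:
    ...  # x: input tensor, y: label tensor, metadata: per-example info
```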
## Running Baseline Models
We provide baseline models for many of the benchmark tasks included in SustainBench. See the `baseline_models` folder for the code and detailed instructions to reproduce our results.
## Dataset Preprocessing
11 of the 15 SustainBench benchmark tasks involve data being publicly released for the first time. We release the processed versions of our datasets on Google Drive. However, we also provide code and detailed instructions showing how we preprocessed each dataset in the `dataset_preprocessing` folder. You do NOT need anything from the `dataset_preprocessing` folder to download the processed datasets or run our baseline models.
## Computing Requirements
This code was tested on a system with the following specifications:
- operating system: Ubuntu 16.04.7 LTS
- CPU: Intel(R) Xeon(R) CPU E5-2620 v4
- memory (RAM): 125 GB
- disk storage: 5 TB
- GPU: NVIDIA P100
The main software requirements are Python 3.7 with TensorFlow r1.15, PyTorch 1.9, and R 4.1. The complete list of required packages and libraries is given in the two conda environment YAML files (`env_create.yml` and `env_bench.yml`), which are meant to be used with `conda` (version 4.10). See here for instructions on installing conda via Miniconda. Once conda is installed, run one of the following commands to set up the desired conda environment:
```bash
conda env update -f env_create.yml --prune
conda env update -f env_bench.yml --prune
```
The conda environment files default to CPU-only packages. If you have a GPU, please comment/uncomment the appropriate lines in the environment files; you may need to also install CUDA 10 or 11 and cuDNN 7.
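As a quick sanity check after activating an environment, the illustrative snippet below prints the framework versions and reports whether a GPU is visible (it assumes both TensorFlow and PyTorch are installed, as in the benchmark environment):

```python
# Sanity-check the activated conda environment: print framework versions
# and report GPU visibility. Illustrative only.
import tensorflow as tf
import torch

print('TensorFlow version:', tf.__version__)  # expect 1.15.x
print('PyTorch version:', torch.__version__)  # expect 1.9.x

# PyTorch GPU check
print('CUDA available to PyTorch:', torch.cuda.is_available())

# TensorFlow 1.x GPU check
print('GPU available to TensorFlow:', tf.test.is_gpu_available())
```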
## Code Formatting and Type Checking
This repo uses flake8 for Python linting and mypy for type checking. Configuration files for each are included in this repo: `.flake8` and `mypy.ini`.
To run linting or type checking, first change to the repo root directory, then run any of the following commands:
```bash
# LINTING
# =======

# entire repo
flake8

# all modules within utils directory
flake8 utils

# a single module
flake8 path/to/module.py

# a Jupyter notebook - ignore these error codes, in addition to the
# ignored codes in .flake8:
# - E305: expected 2 blank lines after class or function definition
# - E402: module level import not at top of file
# - F404: from __future__ imports must occur at the beginning of the file
# - W391: blank line at end of file
jupyter nbconvert path/to/notebook.ipynb --stdout --to script | flake8 - --extend-ignore=E305,E402,F404,W391

# TYPE CHECKING
# =============

# entire repo
mypy .

# all modules within utils directory
mypy -p utils

# a single module
mypy path/to/module.py

# a Jupyter notebook
mypy -c "$(jupyter nbconvert path/to/notebook.ipynb --stdout --to script)"
```
## Citation
Please cite our paper as follows, or use the BibTeX entry below.
C. Yeh, C. Meng, S. Wang, A. Driscoll, E. Rozi, P. Liu, J. Lee, M. Burke, D. B. Lobell, and S. Ermon, "SustainBench: Benchmarks for Monitoring the Sustainable Development Goals with Machine Learning," in Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), Dec. 2021. [Online]. Available: https://openreview.net/forum?id=5HR3vCylqD.
```bibtex
@inproceedings{yeh2021sustainbench,
    title = {{SustainBench: Benchmarks for Monitoring the Sustainable Development Goals with Machine Learning}},
    author = {Christopher Yeh and Chenlin Meng and Sherrie Wang and Anne Driscoll and Erik Rozi and Patrick Liu and Jihyeon Lee and Marshall Burke and David B. Lobell and Stefano Ermon},
    booktitle = {Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
    year = {2021},
    month = {12},
    url = {https://openreview.net/forum?id=5HR3vCylqD}
}
```