im2im-uq

<p align="center"> <a style="text-decoration:none !important;" href="https://arxiv.org/abs/2202.05265" alt="arXiv"> <img src="https://img.shields.io/badge/paper-arXiv-red" /> </a> <a style="text-decoration:none !important;" href="https://people.eecs.berkeley.edu/%7Eangelopoulos/" alt="website"> <img src="https://img.shields.io/badge/website-Berkeley-yellow" /> </a> <a style="text-decoration:none !important;" href="https://docs.conda.io/en/latest/miniconda.html" alt="package management"> <img src="https://img.shields.io/badge/conda-env-green" /> </a> <a style="text-decoration:none !important;" href="https://opensource.org/licenses/MIT" alt="License"> <img src="https://img.shields.io/badge/license-MIT-blue.svg" /> </a> </p>

A platform for image-to-image regression with rigorous, distribution-free uncertainty quantification.

<div align="center"> <img src="./teaser-whitebackground.png" width="70%" /> <br/> <div align="center" width="65%"> <figcaption display="table-caption" width="70%"><b>An algorithmic MRI reconstruction with uncertainty.</b> A rapidly acquired but undersampled MR image of a knee (A) is fed into a model that predicts a sharp reconstruction (B) along with a calibrated notion of uncertainty (C). In (C), red means high uncertainty and blue means low uncertainty. Wherever the reconstruction contains hallucinations, the uncertainty is high; see the hallucination in the image patch (E), which has high uncertainty in (F), and does not exist in the ground truth (G).</figcaption> </div> </div>

Summary

This repository provides a convenient way to train PyTorch deep-learning models for image-to-image regression (any task where both the input and output are images) with rigorous uncertainty quantification. The uncertainty quantification takes the form of an interval for each pixel that is guaranteed to contain most true pixel values with high probability, no matter the choice of model or dataset (it is a risk-controlling prediction set). The training pipeline supports multiple GPUs, and training and calibration run automatically.
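In symbols, the guarantee has the following flavor. This is a paraphrase of the risk-controlling prediction set property from the paper, not new notation from this repo: for an H x W image, alpha and delta are user-specified risk and confidence levels, T_lambda-hat(x) is the calibrated interval for each pixel, and the outer probability is over the calibration data.

```latex
% Risk-controlling prediction set (RCPS) guarantee, paraphrased:
% with probability at least 1 - \delta over the calibration data,
% the expected fraction of pixel values falling outside their
% intervals is at most \alpha.
\mathbb{P}\!\left(
  \mathbb{E}\!\left[
    \frac{1}{HW} \sum_{(i,j)} \mathbb{1}\!\left\{
      y_{(i,j)} \notin \mathcal{T}_{\hat{\lambda}}(x)_{(i,j)}
    \right\}
  \right] \le \alpha
\right) \ge 1 - \delta
```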

The basic workflow is:

1. Define your dataset in core/datasets/.
2. Create a folder for your experiment, experiments/new_experiment, along with a config file experiments/new_experiment/config.yml defining the model architecture, hyperparameters, and method of uncertainty quantification.
3. Edit core/scripts/router.py to point to your new experiment.
4. From the root folder, run wandb sweep experiments/new_experiment/config.yml, then launch the resulting sweep (see the commands below).

Following this procedure will train one or more models (depending on config.yml) that perform image-to-image regression with rigorous uncertainty quantification.
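For step 4, the standard wandb CLI flow looks like this; the experiment folder name is the hypothetical one from the list above, and the entity/project/sweep-id placeholder is printed by wandb sweep:

```bash
# Create the sweep from the config file; wandb prints a sweep ID.
wandb sweep experiments/new_experiment/config.yml

# Run the sweep with an agent (substitute the printed identifier).
wandb agent <entity>/<project>/<sweep-id>
```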

There are two pre-baked examples that you can run on your own after downloading the open-source data: experiments/fastmri_test/config.yml and experiments/temca_test/config.yml. The third pre-baked example, experiments/bsbcm_test/config.yml, relies on data collected at Berkeley that has not yet been publicly released (but will be soon).

Paper

Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging

```bibtex
@article{angelopoulos2022image,
  title={Image-to-Image Regression with Distribution-Free Uncertainty Quantification and Applications in Imaging},
  author={Angelopoulos, Anastasios N and Kohli, Amit P and Bates, Stephen and Jordan, Michael I and Malik, Jitendra and Alshaabi, Thayer and Upadhyayula, Srigokul and Romano, Yaniv},
  journal={arXiv preprint arXiv:2202.05265},
  year={2022}
}
```

Installation

You will need to execute:

```bash
conda env create -f environment.yml
conda activate im2im-uq
```

You will also need to go through the Weights and Biases setup process that initiates when you run your first sweep. You may need to make an account on their website.
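If you would rather authenticate before launching your first sweep, the standard wandb login command does this; it prompts for the API key from your account page:

```bash
# One-time authentication of this machine with Weights and Biases.
wandb login
```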

Reproducing the results

Each pre-baked experiment is reproduced the same way: download the corresponding dataset, then launch the sweep for its config file as described in the workflow above.

FastMRI dataset

After downloading the FastMRI knee data, run wandb sweep experiments/fastmri_test/config.yml and launch the resulting sweep.

TEMCA2 dataset

After downloading the TEMCA2 electron-microscopy data, run wandb sweep experiments/temca_test/config.yml and launch the resulting sweep.

Adding a new experiment

If you want to extend this code to a new experiment, you will need to write some code compatible with our infrastructure. If you are adding a new dataset, you will need to write a valid PyTorch dataset object; if you are adding a new model architecture, you will need to specify it; and so on. Usually, you will want to start by creating a folder experiments/new_experiment along with a config file experiments/new_experiment/config.yml. The easiest way is to start from an existing config, like experiments/fastmri_test/config.yml; a rough sketch of the shape of such a config is below.
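For orientation only: a wandb sweep config has roughly the following shape. Every value below is hypothetical, and experiments/fastmri_test/config.yml is the authoritative template.

```yaml
# Hypothetical sketch of a sweep config; copy the real template instead.
program: core/scripts/train.py    # entry point (hypothetical path)
method: grid                      # wandb hyperparameter search strategy
parameters:
  dataset:
    values: ["new_experiment"]    # hypothetical parameter name
  lr:
    values: [0.001, 0.0001]
```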

Adding new datasets

To add a new dataset, write a valid PyTorch dataset object that returns pairs of (input, output) images, then point the training pipeline at it as described in the workflow above.
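A minimal sketch, assuming paired input/output image tensors; the class and field names here are hypothetical, not part of the repo:

```python
import torch
from torch.utils.data import Dataset

class MyImageRegressionDataset(Dataset):
    """Hypothetical dataset returning (input, target) image pairs."""

    def __init__(self, inputs: torch.Tensor, targets: torch.Tensor):
        # inputs and targets: float tensors of shape (N, C, H, W).
        assert inputs.shape[0] == targets.shape[0]
        self.inputs = inputs
        self.targets = targets

    def __len__(self) -> int:
        return self.inputs.shape[0]

    def __getitem__(self, idx: int):
        # Each example is an (input image, ground-truth image) pair.
        return self.inputs[idx], self.targets[idx]
```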

Adding new models

In our system, a model has two parts: the base architecture, which we call a trunk (e.g. a U-Net), and the final layer. Defining a trunk is as simple as writing a regular PyTorch nn.Module and adding it near Line 87 of core/scripts/router.py (you will also need to import it); see core/models/trunks/unet.py for an example, and the sketch below.
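For instance, a toy trunk might look like the following. This is a minimal sketch; the class name and architecture are hypothetical and far simpler than the repo's U-Net:

```python
import torch
import torch.nn as nn

class TinyConvTrunk(nn.Module):
    """Hypothetical trunk: a small fully-convolutional feature extractor."""

    def __init__(self, in_channels: int = 1, hidden_channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, hidden_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden_channels, hidden_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Preserves spatial dimensions; the final layer maps features to outputs.
        return self.body(x)
```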

The process for adding a final layer is a bit more involved. The final layer is simply a PyTorch nn.Module, but it must also come with two functions: a loss function and a nested prediction set function. See core/models/finallayers/quantile_layer.py for a complete example; a minimal sketch of the required pieces follows.
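The sketch below is illustrative only; all names and signatures are hypothetical, and the repo's actual interface is defined in core/models/finallayers/quantile_layer.py. It shows the three pieces in the style of a quantile-regression final layer: a head predicting per-pixel lower/point/upper estimates, a pinball-style loss, and a prediction set function whose intervals widen, and therefore nest, as the scaling parameter grows:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuantileFinalLayer(nn.Module):
    """Hypothetical final layer: per-pixel (lower, prediction, upper) heads."""

    def __init__(self, in_channels: int, out_channels: int = 1):
        super().__init__()
        # One 1x1 convolution producing three stacked outputs per pixel.
        self.head = nn.Conv2d(in_channels, 3 * out_channels, kernel_size=1)

    def forward(self, features: torch.Tensor):
        lower, pred, upper = self.head(features).chunk(3, dim=1)
        return lower, pred, upper

def pinball_loss(pred: torch.Tensor, target: torch.Tensor, q: float) -> torch.Tensor:
    # Standard quantile (pinball) loss for quantile level q.
    diff = target - pred
    return torch.mean(torch.maximum(q * diff, (q - 1) * diff))

def quantile_layer_loss(outputs, target, q_lo: float = 0.05, q_hi: float = 0.95):
    # Train the lower/upper heads as quantiles, the middle head as a point estimate.
    lower, pred, upper = outputs
    return (pinball_loss(lower, target, q_lo)
            + F.mse_loss(pred, target)
            + pinball_loss(upper, target, q_hi))

def nested_prediction_sets(outputs, lam: float):
    # Intervals widen monotonically in lam, so the sets are nested;
    # calibration then picks the smallest lam that controls the risk.
    lower, pred, upper = outputs
    return pred - lam * (pred - lower), pred + lam * (upper - pred)
```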