DivNoising: Diversity Denoising with Fully Convolutional Variational Autoencoders
Mangal Prakash<sup>1</sup>, Alexander Krull<sup>1,2</sup>, Florian Jug<sup>2</sup> <br> <sup>1</sup>Authors contributed equally, <sup>2</sup>Shared last authors. <br> Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG) <br> Center for Systems Biology (CSBD) in Dresden, Germany.
Deep Learning based methods have emerged as the indisputable leaders for virtually all image restoration tasks. Especially in the domain of microscopy images, various content-aware image restoration (CARE) approaches are now used to improve the interpretability of acquired data. But there are limitations to what can be restored in corrupted images, and any given method needs to make a sensible compromise between many possible clean signals when predicting a restored image. Here, we propose DivNoising - a denoising approach based on fully convolutional variational autoencoders that overcomes this problem by predicting a whole distribution of denoised images. Our method is unsupervised, requiring only noisy images and a description of the imaging noise, which can be measured or bootstrapped from noisy data. If desired, consensus predictions can be inferred from a set of DivNoising predictions, leading to results competitive with other unsupervised methods and, on occasion, even with the supervised state-of-the-art. The samples DivNoising draws from the posterior enable a plethora of useful applications. We (i) discuss how optical character recognition (OCR) applications could benefit from diverse predictions on ambiguous data, and (ii) show in detail how instance cell segmentation gains performance when using diverse DivNoising predictions.
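To make the idea of consensus predictions concrete, here is a minimal sketch of how several diverse denoisings of one noisy image could be averaged into an MMSE-style estimate. This is an illustration only, not the API of this repository: `trained_vae` and its forward signature are hypothetical placeholders for a trained DivNoising-style network that returns one stochastic denoised sample per call.

```python
import torch

# Minimal sketch (hypothetical API, not the repository's code): draw many
# stochastic denoised samples for one noisy image and average them into an
# MMSE-style consensus prediction.
def sample_and_average(trained_vae, noisy_image: torch.Tensor, n_samples: int = 100):
    trained_vae.eval()
    samples = []
    with torch.no_grad():
        for _ in range(n_samples):
            # each forward pass samples a different latent code,
            # hence yields a different plausible denoising
            samples.append(trained_vae(noisy_image.unsqueeze(0)))
    samples = torch.cat(samples, dim=0)     # shape: (n_samples, C, H, W)
    mmse_estimate = samples.mean(dim=0)     # consensus (approximate MMSE)
    return samples, mmse_estimate
```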
Information
This repository hosts the code for the publication Fully Unsupervised Diversity Denoising with Convolutional Variational Autoencoders.
Citation
If you find our work useful in your research, please consider citing:
@inproceedings{
prakash2021fully,
title={Fully Unsupervised Diversity Denoising with Convolutional Variational Autoencoders},
author={Mangal Prakash and Alexander Krull and Florian Jug},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=agHLCOBM5jP}
}
One simple way to install DivNoising
Note: if you do not mind pip, use the next, truly simple way of installing DivNoising.
conda create -n divnoising python=3.7
conda activate divnoising
conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge
conda install pytorch-lightning==1.2.10 -c conda-forge
conda install nb_conda tifffile matplotlib scipy scikit-learn
A truly simple, pip-based way to install DivNoising
conda create -n divnoising python=3.7
conda activate divnoising
conda install nb_conda tifffile matplotlib scipy scikit-learn
pip install pytorch-lightning==1.2.10
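Whichever installation route you choose, a quick sanity check like the one below (a generic snippet, not part of the DivNoising code base) can confirm that PyTorch, CUDA, and PyTorch Lightning are visible from the new environment:

```python
# Generic environment sanity check (not part of the DivNoising code base).
import torch
import pytorch_lightning as pl

print("PyTorch version:          ", torch.__version__)
print("PyTorch Lightning version:", pl.__version__)
print("CUDA available:           ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```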
And another way that seems much less clean, but which we don't dare to remove (just yet)...
We have tested this implementation using PyTorch version 1.1.0 and cudatoolkit version 9.0. (Obviously we have, by now, also tested it with cudatoolkit=11.6, as you can see above.)<br>
Follow the steps below to set up DivNoising. <br>
(i) Move to the command prompt and enter `git clone https://github.com/juglab/DivNoising/`. <br>
(ii) Move to the folder where the repository was cloned with `cd DivNoising`. <br>
(iii) Create a new conda environment with the command `conda env create -f DivNoising.yml`. <br>
(iv) Activate the conda environment with `conda activate DivNoising`. <br>
(v) Install tensorboard with the command `conda install -c conda-forge tensorboard`. <br>
(vi) Install jupyter with the command `pip install -U jupyter protobuf`. <br>
(vii) Finally, execute the command `pip install ipykernel` followed by the command `python -m ipykernel install --user --name DivNoising --display-name 'DivNoising'`. <br>
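If you want to verify that the kernel registration in step (vii) worked, the snippet below (using the standard jupyter_client package, nothing specific to DivNoising) lists the kernels Jupyter can see; an entry for DivNoising should appear among them.

```python
# List the kernels Jupyter knows about; an entry named 'DivNoising'
# (possibly lowercased to 'divnoising') should appear after step (vii).
from jupyter_client.kernelspec import KernelSpecManager

for name, path in KernelSpecManager().find_kernel_specs().items():
    print(f"{name:20s} -> {path}")
```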
You are all set to run DivNoising now.
Getting Started
Look in the `examples` directory and try out the notebooks. Inside this directory, there are folders corresponding to different datasets.
If your data is real microscopy data with intrinsic noise (the Convallaria and Mouse skull nuclei datasets in our case), you will need a noise model, which can be generated by first running the notebook (i) `0-CreateNoiseModel.ipynb`. This will create a suitable noise model. Next, run (ii) `1-Training.ipynb`, which starts network training. Following this, run (iii) `2-Prediction.ipynb`, which starts the prediction step.
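For intuition, a noise model essentially describes p(noisy | clean) per pixel intensity. The sketch below is only a conceptual illustration of how such a model could be estimated as a row-normalized 2D histogram from co-registered calibration images; the actual construction (including bootstrapping from noisy data alone) is handled by `0-CreateNoiseModel.ipynb` and its own noise-model classes, not by this snippet.

```python
import numpy as np

# Conceptual sketch only (not the repository's implementation): estimate an
# empirical noise model p(noisy | clean) as a row-normalized 2D histogram
# over co-registered clean/noisy calibration images.
def histogram_noise_model(clean, noisy, n_bins=256, min_val=0.0, max_val=65535.0):
    edges = np.linspace(min_val, max_val, n_bins + 1)
    counts, _, _ = np.histogram2d(clean.ravel(), noisy.ravel(), bins=[edges, edges])
    counts += 1e-10                                    # avoid empty (all-zero) rows
    return counts / counts.sum(axis=1, keepdims=True)  # row i ~ p(noisy | clean in bin i)
```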
If your noisy data was generated by synthetic corruption with Gaussian noise, you can start with the training step directly by running `1-Training.ipynb` followed by `2-Prediction.ipynb`.
Remember to select the kernel `DivNoising` whenever you run any of the Jupyter notebooks.
Minor note
This is the PyTorch Lightning version of DivNoising and gives results equivalent to the plain PyTorch version used for the paper. The PyTorch version can still be accessed via release v0.1 in this repository.