cudaDecon

GPU-accelerated 3D image deconvolution & affine transforms using CUDA.

Python bindings are also available at pycudadecon
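For readers new to deconvolution: the underlying idea is iterative restoration of an image blurred by a known point spread function (PSF). A minimal sketch, assuming Richardson-Lucy iteration (the classic scheme for this kind of restoration) — this toy 1-D NumPy example is purely illustrative; cudaDecon does the real work in 3-D on the GPU:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=20):
    """Plain NumPy Richardson-Lucy deconvolution (1-D, for illustration)."""
    psf_flipped = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Blur a sharp spike with a Gaussian PSF, then try to recover it.
psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)
psf /= psf.sum()
truth = np.zeros(64)
truth[32] = 10.0
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf, iterations=50)
print(int(np.argmax(restored)))  # the spike is recovered at index 32
```

The restored signal is sharper than the observed one: the peak value climbs back toward the original spike height with each iteration.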

Install

Precompiled binaries are available for Linux and Windows on conda-forge (see GPU driver requirements below):

conda install -c conda-forge cudadecon

# or... to also install the python bindings
conda install -c conda-forge pycudadecon

Usage

# check that GPU is discovered
cudaDecon -Q

# Basic Usage
# 1. create an OTF from a PSF with "radialft"
radialft /path/to/psf.tif /path/to/otf_output.tif --nocleanup --fixorigin 10
# 2. run decon on a folder of tiffs:
# 'filename_pattern' is a string that must appear in the filename to be processed
cudaDecon $OPTIONS /folder/of/images filename_pattern /path/to/otf_output.tif

# see manual for all of the available arguments
cudaDecon --help
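Conceptually, the OTF that radialft produces is the (radially averaged) Fourier transform of the PSF. A minimal 1-D NumPy sketch of that relationship — illustrative only, radialft operates on real 3-D PSF stacks:

```python
import numpy as np

# A normalized toy Gaussian PSF.
psf = np.exp(-0.5 * (np.arange(-16, 16) / 2.0) ** 2)
psf /= psf.sum()

# The OTF is the Fourier transform of the PSF; shift the PSF center
# to index 0 first so the transform's phase is well-behaved.
otf = np.fft.rfft(np.fft.ifftshift(psf))
print(abs(otf[0]))  # DC component of a normalized PSF is 1.0
```

Note how the OTF magnitude falls off toward high frequencies: a smooth (wide) PSF passes little high-frequency content, which is exactly what deconvolution tries to compensate for.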

GPU requirements

This software requires a CUDA-compatible NVIDIA GPU. The libraries available on conda-forge have been compiled against different versions of the CUDA toolkit. The required CUDA libraries are bundled in the conda distributions so you don't need to install the CUDA toolkit separately. If desired, you can pick which version of CUDA you'd like based on your needs, but please note that different versions of the CUDA toolkit have different GPU driver requirements:

To pin a particular cudatoolkit version, install as follows (for instance, to use cudatoolkit=10.2):

conda install -c conda-forge cudadecon cudatoolkit=10.2
CUDA    Linux driver    Win driver
10.2    ≥ 440.33        ≥ 441.22
11.0    ≥ 450.36.06     ≥ 451.22
11.1    ≥ 455.23        ≥ 456.38
11.2    ≥ 460.27.03     ≥ 460.82
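The table above can be encoded as a simple lookup to sanity-check an installed driver against a chosen cudatoolkit version. The `driver_ok` helper below is hypothetical (not part of cudaDecon); the version numbers come straight from the table:

```python
# Minimum driver versions per cudatoolkit release (from the table above).
MIN_DRIVER = {  # cudatoolkit: (min Linux driver, min Windows driver)
    "10.2": ("440.33", "441.22"),
    "11.0": ("450.36.06", "451.22"),
    "11.1": ("455.23", "456.38"),
    "11.2": ("460.27.03", "460.82"),
}

def driver_ok(cuda_version, driver_version, platform="linux"):
    """True if driver_version meets the minimum for cuda_version."""
    minimum = MIN_DRIVER[cuda_version][0 if platform == "linux" else 1]
    as_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return as_tuple(driver_version) >= as_tuple(minimum)

print(driver_ok("11.2", "470.57.02"))           # True
print(driver_ok("10.2", "441.00", "windows"))   # False
```

On Linux you can find your driver version with `nvidia-smi`.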

If you run into trouble, feel free to open an issue and describe your setup.


Notes


Local build instructions

If you simply wish to use this package, it is best to install the precompiled binaries from conda as described above.

To build the source locally, you have two options:

1. Build using run_docker_build

With Docker installed, use .scripts/run_docker_build.sh with one of the configs available in .ci_support, for instance:

CONFIG=linux_64_cuda_compiler_version10.2 .scripts/run_docker_build.sh

2. Using cmake directly in a conda environment

Here we create a dedicated conda environment with all of the build dependencies installed, and then use cmake directly. This method is faster and creates an immediately usable binary (i.e. it is better for iteration if you're changing the source code), but requires that you set up the build dependencies correctly.

  1. install miniconda

  2. install cudatoolkit (I haven't yet tried 10.2)

  3. (Windows only) install build tools for Visual Studio 2017. On Linux, all necessary build tools will be installed by conda.

  4. create a new conda environment with all of the dependencies installed

    conda config --add channels conda-forge
    conda create -n build -y cmake boost-cpp libtiff fftw ninja
    conda activate build  
    # you will need to reactivate the "build" environment each time you close the terminal
    
  5. create a new build directory inside of the top level cudaDecon folder

    mkdir build  # inside the cudaDecon folder
    cd build
    
  6. (Windows only) Activate your build tools:

    "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Auxiliary\Build\vcvars64.bat"
    
  7. Run cmake, then compile with ninja on Windows or make on Linux.

    # windows
    cmake ../src -DCMAKE_BUILD_TYPE=Release -G "Ninja"
    ninja
    
    # linux
    cmake ../src -DCMAKE_BUILD_TYPE=Release
    make -j4
    

    Note that you can specify which CUDA version to use with the -DCUDA_TOOLKIT_ROOT_DIR flag, e.g. -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-11.2 (adjust the path to your install location).

The binary will be written to cudaDecon\build\<platform>-<compiler>-release. If you change the source code, you can just rerun ninja or make and the binary will be updated.