IMPLEMENTATION OF MODEL-BLIND VIDEO DENOISING VIA FRAME-TO-FRAME TRAINING

OVERVIEW

This code is provided to reproduce the results of "Model-blind Video Denoising via Frame-to-frame Training", T. Ehret, A. Davy, J.-M. Morel, G. Facciolo, P. Arias, CVPR 2019. Please cite the paper if you use this code as part of your research.

The sequences used in the article can be found at https://github.com/cmla/derf-hd-dataset.

USAGE

List all available options:

$ python blind_denoising.py --help

There are 4 mandatory input arguments and 5 optional ones; see the help message for their descriptions.

The input sequence should already be a degraded grayscale sequence (the code reads grayscale PNG, JPEG and TIFF files). The optical flow must be computed before running the denoising code, preferably on the degraded sequence.
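The method fine-tunes a denoiser on pairs of consecutive frames registered by the precomputed optical flow, so warping one frame onto the next is a central step. Below is a minimal numpy sketch of backward warping with bilinear interpolation; the function name and the `(dx, dy)` flow layout are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

def warp(frame, flow):
    """Backward-warp `frame` along `flow` with bilinear interpolation.

    frame: (h, w) grayscale image.
    flow:  (h, w, 2) field where flow[y, x] = (dx, dy), so pixel (x, y)
           samples the frame at (x + dx, y + dy). Coordinates are clamped
           to the image borders (assumed behavior; real code may mask
           out-of-frame pixels instead).
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    x = np.clip(xs + flow[..., 0], 0, w - 1)
    y = np.clip(ys + flow[..., 1], 0, h - 1)
    x0 = np.floor(x).astype(int)
    y0 = np.floor(y).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    wx, wy = x - x0, y - y0
    top = frame[y0, x0] * (1 - wx) + frame[y0, x1] * wx
    bot = frame[y1, x0] * (1 - wx) + frame[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

A constant flow of (1, 0) simply shifts the sampling grid one pixel to the right, which is a quick way to sanity-check the interpolation.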

OPTICAL FLOW

The code used to compute the optical flow for the CVPR paper is provided in the tvl1flow folder. It's a modified version of "Javier Sánchez Pérez, Enric Meinhardt-Llopis, and Gabriele Facciolo, TV-L1 Optical Flow Estimation, Image Processing On Line, 3 (2013), pp. 137–150."

The code compiles on Unix/Linux and should also compile on macOS (not tested!).

Compilation requires the cmake and make programs. Build the source code as follows:

UNIX/LINUX/MAC:

$ mkdir build; cd build
$ cmake ..
$ make

Binaries will be created in the build/ folder.

NOTE: By default, the code is compiled with OpenMP multithreaded parallelization enabled (if your system supports it).
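If (as is common for TV-L1 implementations) the computed flow fields are stored in the Middlebury .flo format, they are easy to inspect from Python. This is a sketch under that assumption: a .flo file starts with the 4-byte magic float 202021.25, then int32 width and height, then row-major little-endian float32 (u, v) pairs.

```python
import struct
import numpy as np

FLO_MAGIC = 202021.25  # Middlebury sanity-check value

def read_flo(path):
    """Read a Middlebury .flo file into an (h, w, 2) float32 array."""
    with open(path, "rb") as f:
        magic, = struct.unpack("<f", f.read(4))
        if abs(magic - FLO_MAGIC) > 1e-3:
            raise ValueError("not a .flo file: bad magic %r" % magic)
        w, h = struct.unpack("<ii", f.read(8))
        data = np.frombuffer(f.read(4 * 2 * w * h), dtype="<f4")
        return data.reshape(h, w, 2).copy()

def write_flo(path, flow):
    """Write an (h, w, 2) array as a Middlebury .flo file."""
    h, w, _ = flow.shape
    with open(path, "wb") as f:
        f.write(struct.pack("<f", FLO_MAGIC))
        f.write(struct.pack("<ii", w, h))
        f.write(np.asarray(flow, dtype="<f4").tobytes())
```

Round-tripping a small random field through `write_flo`/`read_flo` is a quick way to check the byte layout.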

A script tvl1flow.sh is also provided. The command to run this script is (assuming it is in the same folder as the tvl1flow binary):

$ ./tvl1flow.sh inputPath first last outputPath

where the mandatory inputs are: