
MM-BSN: Self-Supervised Image Denoising for Real-World with Mutil-Mask based on Blind-Spot Network

MM-BSN (arXiv) has been accepted by CVPRW 2023.


Abstract

Recent advances in deep learning have been pushing image denoising techniques to a new level. In self-supervised image denoising, blind-spot network (BSN) is one of the most common methods. However, most of the existing BSN algorithms use a dot-based central mask, which is recognized as inefficient for images with large-scale spatially correlated noise. In this paper, we give the definition of large-noise and propose a multi-mask strategy using multiple convolutional kernels masked in different shapes to further break the noise spatial correlation. Furthermore, we propose a novel self-supervised image denoising method that combines the multi-mask strategy with BSN (MM-BSN). We show that different masks can cause significant performance differences, and the proposed MM-BSN can efficiently fuse the features extracted by multi-masked layers, while recovering the texture structures destroyed by multi-masking and information transmission. Our MM-BSN can be used to address the problem of large-noise denoising, which cannot be efficiently handled by other BSN methods. Extensive experiments on public real-world datasets demonstrate that the proposed MM-BSN achieves state-of-the-art performance among self-supervised and even unpaired image denoising methods for sRGB images denoising, without any labelling effort or prior knowledge.
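To illustrate the idea, here is a minimal NumPy sketch of a masked convolution. The mask names follow the repo's `mask_type` strings ('o' for the classic dot-shaped blind spot, 'a45' for a 45-degree stripe), but the exact stripe geometry here is an assumption for demonstration, not the paper's definition.

```python
import numpy as np

def make_mask(shape: str, k: int = 3) -> np.ndarray:
    """Binary mask that zeroes selected kernel positions.
    'o'   : dot-shaped blind spot (center pixel only).
    'a45' : assumed 45-degree diagonal stripe through the center."""
    mask = np.ones((k, k))
    c = k // 2
    if shape == "o":
        mask[c, c] = 0.0
    elif shape == "a45":
        for i in range(k):
            mask[i, i] = 0.0  # diagonal stripe (illustrative assumption)
    else:
        raise ValueError(f"unknown mask shape: {shape}")
    return mask

def masked_conv2d(img: np.ndarray, weight: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Valid 2D convolution with the mask applied to the weights, so the
    output never sees the masked (potentially noise-correlated) pixels."""
    w = weight * mask
    k = w.shape[0]
    H, W = img.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * w)
    return out

img = np.random.rand(8, 8)
out = masked_conv2d(img, np.ones((3, 3)) / 9.0, make_mask("o"))
print(out.shape)  # (6, 6)
```

A dot mask blinds only the center pixel, so spatially correlated noise still leaks in from neighbors; stripe-shaped masks blind a whole line of correlated pixels, which is the motivation for combining multiple mask shapes.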

Parameters

| Model  | SIDD Validation (PSNR/SSIM) | Parameters |
|--------|-----------------------------|------------|
| AP-BSN | 35.91 / 0.870               | 3.7M       |
| MM-BSN | 37.38 / 0.882               | 5.3M       |
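The first number in each pair is PSNR (dB), the second SSIM. As a reference for the metric, here is a minimal PSNR sketch in NumPy (an illustrative helper, not code from this repo):

```python
import numpy as np

def psnr(clean: np.ndarray, denoised: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for images in [0, data_range]."""
    mse = np.mean((clean.astype(float) - denoised.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

a = np.zeros((4, 4))
b = np.full((4, 4), 16.0)
print(round(psnr(a, b), 2))  # MSE = 256 -> 10*log10(255^2/256) ≈ 24.05
```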

Setup

Requirements

Our experiments are done with:

Directory

Follow the directory structure below to set up the code.

AP-BSN
├─ ckpt
├─ config
├─ DataDeal
├─ dataset
│  ├─ DND
│  ├─ SIDD
│  ├─ prep
│  ├─ test_data
├─ figs  
├─ model
├─ output
├─ util

Quick test

To test noisy images with the pre-trained MM-BSN model on GPU 0:

python test.py -c SIDD -g 0 --pretrained ./ckpt/SIDD_MMBSN_o_a45.pth --test_dir ./dataset/test_data

Training

usage: python train.py [-c CONFIG_NAME] [-g GPU_NUM] 
                       [-r RESUME] [-p PRETRAINED_CKPT] 
                       [-t THREAD_NUM] [-se SELF_ENSEMBLE]
                       [-sd OUTPUT_SAVE_DIR] [-rd DATA_ROOT_DIR]

Train model.

Part of the arguments in config/SIDD.yaml:

model:
  kwargs:
    type: MMBSN    # basic model type, e.g. MMBSN, CSCBSN
    pd_a: 5
    pd_b: 2
    pd_pad: 2
    R3: True
    R3_T: 8
    R3_p: 0.16
    in_ch: 3
    bsn_base_ch: 128
    bsn_num_module: 9
    DCL1_num: 2
    DCL2_num: 7
    mask_type: 'o_fsz'  # mask type, e.g. 'o_a45' means the combination of 'o' and 'a45'
    

You can also control other experimental configurations (e.g. training loss, number of epochs, batch size) in each config file.
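The `pd_a` and `pd_b` entries appear to be pixel-shuffle downsampling (PD) stride factors for training and inference, a mechanism inherited from AP-BSN for breaking noise spatial correlation. A minimal NumPy sketch of PD under that assumption (not the repo's actual implementation):

```python
import numpy as np

def pd_down(img: np.ndarray, f: int) -> np.ndarray:
    """Pixel-shuffle downsampling: split an HxW image into f*f sub-images
    of size (H/f)x(W/f), grouping pixels of the same phase. This spreads
    spatially adjacent (noise-correlated) pixels into different sub-images.
    H and W must be divisible by f."""
    H, W = img.shape
    subs = img.reshape(H // f, f, W // f, f).transpose(1, 3, 0, 2)
    return subs.reshape(f * f, H // f, W // f)

def pd_up(subs: np.ndarray, f: int) -> np.ndarray:
    """Inverse of pd_down: reassemble the f*f sub-images into the full image."""
    n, h, w = subs.shape
    return subs.reshape(f, f, h, w).transpose(2, 0, 3, 1).reshape(h * f, w * f)

img = np.arange(16, dtype=float).reshape(4, 4)
subs = pd_down(img, 2)       # shape (4, 2, 2)
assert np.allclose(pd_up(subs, 2), img)  # exact round trip
```

Using a larger factor at training time (pd_a: 5) than at inference (pd_b: 2) trades off decorrelation strength against detail preservation, which matches how AP-BSN describes asymmetric PD.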

Evaluations


Acknowledgement

Part of our code is adapted from AP-BSN, and we are grateful to its authors for sharing their work.

Cite

@InProceedings{Zhang_2023_CVPR,
    author    = {Zhang, Dan and Zhou, Fangfang and Jiang, Yuwen and Fu, Zhengming},
    title     = {MM-BSN: Self-Supervised Image Denoising for Real-World With Multi-Mask Based on Blind-Spot Network},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {4188-4197}
}