Restorer: Removing Multi-Degradation with All-Axis Attention and Prompt Guidance

<a href="https://arxiv.org/abs/2406.12587"><img src="https://img.shields.io/badge/arXiv-2406.12587-b31b1b.svg" height=22.5></a> <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" height=22.5></a>

Code for the paper Restorer: Removing Multi-Degradation with All-Axis Attention and Prompt Guidance

Note: This codebase is not yet complete.


About this repo:

This repo hosts the implementation code for the paper "Restorer".

Introduction

There are many excellent solutions in image restoration. However, most methods require training separate models to restore images with different types of degradation. Although existing all-in-one models effectively address multiple types of degradation simultaneously, their performance in real-world scenarios is still constrained by the task confusion problem. In this work, we attempt to address this issue by introducing Restorer, a novel Transformer-based all-in-one image restoration model. To effectively handle the complex degradation present in real-world images, we propose All-Axis Attention (AAA), a novel attention mechanism that simultaneously models long-range dependencies across both the spatial and channel dimensions, capturing potential correlations along all axes. Additionally, we introduce textual prompts in Restorer to incorporate explicit task priors, enabling the removal of specific degradation types based on user instructions. By iterating over these prompts, Restorer can handle composite degradation in real-world scenarios without requiring additional training. Based on these designs, Restorer with a single set of parameters achieves state-of-the-art performance across multiple image restoration tasks compared to existing all-in-one and even single-task models. Restorer is also efficient at inference time, suggesting its potential for real-world applications.

<div align=center> <img src="./imgs/pipeline.png" width="1000"/> </div>
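As a rough illustration of the all-axis idea, the sketch below computes attention once across flattened spatial positions and once across channels, then combines the two. This is not the paper's AAA implementation; the function names, shapes, and the additive combination are our own assumptions for demonstration only:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(x):
    # x: (C, HW); attend across spatial positions
    attn = softmax(x.T @ x / np.sqrt(x.shape[0]))  # (HW, HW)
    return x @ attn                                # (C, HW)

def channel_attention(x):
    # attend across channels
    attn = softmax(x @ x.T / np.sqrt(x.shape[1]))  # (C, C)
    return attn @ x                                # (C, HW)

def all_axis_sketch(x):
    # combine the spatial-axis and channel-axis attention outputs
    return spatial_attention(x) + channel_attention(x)

feat = np.random.rand(8, 16)  # 8 channels, 4x4 spatial positions flattened
out = all_axis_sketch(feat)
print(out.shape)  # (8, 16)
```

The point of the sketch is only that dependencies are modeled along every axis of the feature map, not just the spatial one.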

News 🚀

Quick Start

Install

# git clone this repository
git clone https://github.com/Talented-Q/Restorer.git
cd Restorer

# create new anaconda env
conda create -n Restorer python=3.7
conda activate Restorer 

# install packages
pip install -r requirements.txt

Datasets:

Dataset setting:

| Task | Dataset | #Train | #Test | Test Set Name |
| --- | --- | --- | --- | --- |
| Desnowing | CSD | 5000 | 2000 | CSD |
| Deraining | RAIN1400 | 5000 | 1400 | rain1400 |
| Dehazing | OTS | 5000 | 500 | SOTS |
| Denoising | SIDD | 5000 | 1280 | SIDD |
| Deblurring | GoPro | 2103 | 1111 | GoPro |
| Deblurring | RealBlur-R | 3758 | 980 | RealBlur-R |
| Lowlight Enhancement | LOL | 485 | 15 | LOL |

Train data:

Restorer is trained on a combination of images sampled from the CSD, rain1400, and OTS datasets (similar to TKL, CVPR 2022), together with SIDD, GoPro, RealBlur-R, and LOL, dubbed the "mixed training set", containing 26346 images.
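The per-dataset training counts in the table above do add up to the stated size of the mixed training set, which can be checked directly:

```python
# Per-task training-image counts taken from the dataset setting table.
train_counts = {
    "CSD": 5000, "rain1400": 5000, "OTS": 5000, "SIDD": 5000,
    "GoPro": 2103, "RealBlur-R": 3758, "LOL": 485,
}
total = sum(train_counts.values())
print(total)  # 26346
```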

Dataset format:

Please download the datasets, sample the number of images specified in the dataset setting above, and arrange them in the following format. Download the val datasets in this link.

    Restorer
    ├── train 
    |   ├── input # Training  
    |   |   ├── <degradation_kind1>   
    |   |   |   ├── 1.png          
    |   |   |   └── ...    
    |   |   ├── <degradation_kind2> 
    |   |   └── ... 
    |   |
    |   ├── gt # Training  
    |   |   ├── <degradation_kind1>   
    |   |   |   ├── 1.png          
    |   |   |   └── ...    
    |   |   ├── <degradation_kind2> 
    |   |   └── ... 
    |
    ├── val      
    |   ├── <degradation_kind1>          
    |   |   ├── input         
    |   |   |   ├── 1.png          
    |   |   |   └── ...     
    |   |   |── gt         
    |   |   |   ├── 1.png          
    |   |   |   └── ...        
    |   ├── <degradation_kind2>    
    |   └── ... 
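A small helper along these lines can create the expected directory layout and verify that every training input has a same-named ground-truth file. The degradation names used here are hypothetical placeholders; substitute your own, and note this script is not part of the repo:

```python
from pathlib import Path
import tempfile

# Hypothetical degradation-kind names; replace with your own.
KINDS = ["snow", "rain", "haze", "noise", "blur", "lowlight"]

def make_layout(root="Restorer"):
    """Create the train/val directory tree described above."""
    root = Path(root)
    for kind in KINDS:
        (root / "train" / "input" / kind).mkdir(parents=True, exist_ok=True)
        (root / "train" / "gt" / kind).mkdir(parents=True, exist_ok=True)
        (root / "val" / kind / "input").mkdir(parents=True, exist_ok=True)
        (root / "val" / kind / "gt").mkdir(parents=True, exist_ok=True)
    return root

def check_pairs(root):
    """Return training inputs that lack a matching ground-truth file."""
    root = Path(root)
    missing = []
    for img in (root / "train" / "input").rglob("*.png"):
        gt = root / "train" / "gt" / img.relative_to(root / "train" / "input")
        if not gt.exists():
            missing.append(img)
    return missing

# Demo in a temporary directory to avoid touching the working tree.
root = make_layout(Path(tempfile.mkdtemp()) / "Restorer")
print(check_pairs(root))  # [] when every input has a matching gt
```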

Training

Training the Restorer

python train.py --train_root=train_root --val_root=val_root

Evaluating

Testing the Restorer

Please download our checkpoint and place it at your checkpoint path.

python evaluate.py --val_root=val_root --task=task --save=save --ckpt_path=ckpt_path

Performance

Comparison with unified image restoration methods:

<div align=center> <img src="./imgs/Fig6.png" width="1000"/> </div>

Comparison with expert networks:

<div align=center> <img src="./imgs/Fig12.png" width="1000"/> </div>

Real world test:

<div align=center> <img src="./imgs/Fig8.png" width="1000"/> </div>

Composite degradation restoration:

Lowlight + blur:

<div align=center> <img src="./imgs/Figc1.png" width="1000"/> </div>

Lowlight + noise:

<div align=center> <img src="./imgs/Figc2.png" width="1000"/> </div>

Acknowledgements:

This codebase uses certain code blocks and helper functions from Transweather, Syn2Real, Segformer, and ViT.

Citation:

@misc{mao2024restorerremovingmultidegradationallaxis,
      title={Restorer: Removing Multi-Degradation with All-Axis Attention and Prompt Guidance}, 
      author={Jiawei Mao and Juncheng Wu and Yuyin Zhou and Xuesong Yin and Yuanqi Chang},
      year={2024},
      eprint={2406.12587},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.12587}, 
}