
<p align="center"> <img src="assets/logo.png" width="300"> </p>

News: MambaIRv2 is now available!

The updated MambaIRv2 is fully compatible with the previous MambaIR. We also keep the MambaIR code in a separate branch in case you want to revisit it.

Check our paper collection of Awesome Mamba in Low-Level Vision :hugs:.

[ECCV24] MambaIR: A Simple Baseline for Image Restoration with State-Space Model

MambaIRv2: Attentive State Space Restoration

[Paper] [Zhihu(ηŸ₯乎)]

Hang Guo*, Yong Guo*, Yaohua Zha, Yulun Zhang, Wenbo Li, Tao Dai, Shu-Tao Xia, Yawei Li

(*) equal contribution

Abstract: Mamba-based image restoration backbones have recently demonstrated significant potential in balancing global receptive fields and computational efficiency. However, the inherent causal modeling limitation of Mamba, where each token depends solely on its predecessors in the scanned sequence, restricts the full utilization of pixels across the image and thus presents new challenges in image restoration. In this work, we propose MambaIRv2, which equips Mamba with non-causal modeling ability similar to ViTs, yielding an attentive state-space restoration model. Specifically, the proposed attentive state-space equation allows attending beyond the scanned sequence and facilitates image unfolding with just a single scan. Moreover, we further introduce a semantic-guided neighboring mechanism to encourage interaction between distant but similar pixels. Extensive experiments show our MambaIRv2 outperforms SRFormer by up to 0.35 dB PSNR on lightweight SR with 9.3% fewer parameters, and surpasses HAT on classic SR by up to 0.29 dB.
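For reference, the causal limitation mentioned in the abstract stems from the standard discretized state-space recurrence that Mamba scans with (the textbook SSM formulation, not this repo's attentive variant):

$$h_t = \bar{A}\,h_{t-1} + \bar{B}\,x_t, \qquad y_t = C\,h_t$$

Each output $y_t$ therefore depends only on the already-scanned tokens $x_1,\dots,x_t$, which is exactly the restriction the attentive state-space equation is designed to lift.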

<p align="center"> <img src="assets/pipeline.png" style="border-radius: 15px"> </p>

⭐If this work is helpful for you, please help star this repo. Thanks!πŸ€—

πŸ“‘ Contents

- [Real-world SR](#Real-SR)
- [Visual Results](#visual_results)
- [News](#news)
- [TODO](#todo)
- [Model Summary](#model_summary)
- [Results](#results)
- [Installation](#installation)
- [Training](#training)
- [Testing](#testing)
- [Citation](#cite)

<a name="Real-SR"></a>πŸ” Real-world SR with MambaIR

<img src="assets/imgsli1.png" height="250"/> <img src="assets/imgsli2.png" height="250"/> <img src="assets/imgsli3.png" height="250"/>

<img src="assets/imgsli4.png" height="250"/> <img src="assets/imgsli5.png" height="250"/> <img src="assets/imgsli6.png" height="250"/>

<a name="visual_results"></a>:eyes:Visual Results On Classic Image SR

<p align="center"> <img width="800" src="assets/visual_results.png"> </p>

<a name="news"></a> πŸ†• News

<a name="todo"></a> β˜‘οΈ TODO

<a name="model_summary"></a> :page_with_curl: Model Summary

MambaIR (v1):

| Model | Task | Test Dataset | PSNR | SSIM | Model Weights | Log Files |
| --- | --- | --- | --- | --- | --- | --- |
| MambaIR_SR2 | Classic SR x2 | Urban100 | 34.15 | 0.9446 | link | link |
| MambaIR_SR3 | Classic SR x3 | Urban100 | 29.93 | 0.8841 | link | link |
| MambaIR_SR4 | Classic SR x4 | Urban100 | 27.68 | 0.8287 | link | link |
| MambaIR_light2 | Lightweight SR x2 | Urban100 | 32.92 | 0.9356 | link | link |
| MambaIR_light3 | Lightweight SR x3 | Urban100 | 29.00 | 0.8689 | link | link |
| MambaIR_light4 | Lightweight SR x4 | Urban100 | 26.75 | 0.8051 | link | link |
| MambaIR_realDN | Real Image Denoising | SIDD | 39.89 | 0.960 | link | link |
| MambaIR_realSR | Real-world SR | RealSRSet | - | - | link | link |
| MambaIR_guassian15 | Gaussian Denoising (level 15) | Urban100 | 35.17 | - | link | link |
| MambaIR_guassian25 | Gaussian Denoising (level 25) | Urban100 | 32.99 | - | link | link |
| MambaIR_guassian50 | Gaussian Denoising (level 50) | Urban100 | 30.07 | - | link | link |
| MambaIR_JEPG10 | JPEG CAR q10 | Classic5 | 30.27 | 0.8256 | link | link |
| MambaIR_JPEG30 | JPEG CAR q30 | Classic5 | 33.74 | 0.8965 | link | link |
| MambaIR_JPEG40 | JPEG CAR q40 | Classic5 | 34.53 | 0.9084 | link | link |

MambaIRv2:

| Model | Task | Test Dataset | PSNR | SSIM | Model Weights | Log Files |
| --- | --- | --- | --- | --- | --- | --- |
| MambaIRv2_light2 | Lightweight SR x2 | Urban100 | 33.26 | 0.9378 | link | link |
| MambaIRv2_light3 | Lightweight SR x3 | Urban100 | 29.01 | 0.8689 | link | link |
| MambaIRv2_light4 | Lightweight SR x4 | Urban100 | 26.82 | 0.8079 | link | link |
| MambaIRv2_SR2 | Classic SR x2 | Urban100 | 34.49 | 0.9468 | link | link |
| MambaIRv2_SR3 | Classic SR x3 | Urban100 | 30.28 | 0.8905 | link | link |
| MambaIRv2_SR4 | Classic SR x4 | Urban100 | 27.89 | 0.8344 | link | link |
| MambaIRv2_guassian15 | Gaussian Denoising (level 15) | Urban100 | 35.42 | - | link | link |
| MambaIRv2_JPEG10 | JPEG CAR q10 | Classic5 | 30.37 | 0.8269 | link | link |
| MambaIRv2_JPEG30 | JPEG CAR q30 | Classic5 | 33.81 | 0.8970 | link | link |
| MambaIRv2_JPEG40 | JPEG CAR q40 | Classic5 | 34.64 | 0.9093 | link | link |

<a name="results"></a> πŸ₯‡ Results with MambaIRv2

We achieve state-of-the-art performance on various image restoration tasks. Detailed results can be found in the paper.

<details>
<summary>Evaluation on Classic SR (click to expand)</summary>
<p align="center"> <img width="500" src="assets/classicSR.png"> </p>
</details>

<details>
<summary>Evaluation on Lightweight SR (click to expand)</summary>
<p align="center"> <img width="500" src="assets/lightSR.png"> </p>
</details>

<details>
<summary>Evaluation on Gaussian Color Image Denoising (click to expand)</summary>
<p align="center"> <img width="500" src="assets/gaussian_dn.png"> </p>
</details>

<details>
<summary>Evaluation on JPEG CAR (click to expand)</summary>
<p align="center"> <img width="500" src="assets/car.png"> </p>
</details>

<details>
<summary>Evaluation on Effective Receptive Field (click to expand)</summary>
<p align="center"> <img width="600" src="assets/erf.png"> </p>
</details>

<a name="installation"></a> :wrench: Installation

This codebase was tested with the environment configurations described below; it may also work with other versions.

The following gives three possible ways to install the Mamba-related libraries.

Previous installation

To use the selective scan with its efficient hardware-aware design, install the mamba_ssm library with the following commands:

pip install causal_conv1d==1.0.0
pip install mamba_ssm==1.0.1
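After installation, a quick import check can confirm that both libraries are importable (the CUDA kernels themselves are only exercised at run time):

python -c "import causal_conv1d, mamba_ssm; print('mamba libraries OK')"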

One can also create a new Anaconda environment and then install the necessary Python libraries from requirements.txt with the following command:

conda install --yes --file requirements.txt
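If some packages in requirements.txt are not available from your conda channels, installing the same list with pip is a common fallback:

pip install -r requirements.txt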

Updated installation

One can also reproduce the conda environment with the following simple commands (CUDA 11.7 is used; you can modify the yaml file for your CUDA version):

cd ./MambaIR
conda env create -f environment.yaml
conda activate mambair

Backup installation

If you encounter difficulties installing causal_conv1d or mamba_ssm, e.g., if your network cannot reach GitHub, it is recommended to install from offline .whl packages.
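A minimal sketch of the offline route, assuming you have already downloaded .whl files matching your Python and CUDA versions from the two projects' GitHub release pages (the filenames below are hypothetical):

# install the pre-downloaded wheels locally (hypothetical filenames)
pip install ./causal_conv1d-1.0.0-cp39-cp39-linux_x86_64.whl
pip install ./mamba_ssm-1.0.1-cp39-cp39-linux_x86_64.whl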

Datasets

The datasets used in our training and testing are organized as follows:

| Task | Training Set | Testing Set | Visual Results (v2) |
| --- | --- | --- | --- |
| Image SR | DIV2K (800 training images) + Flickr2K (2650 images) [complete dataset DF2K download] | Set5 + Set14 + BSD100 + Urban100 + Manga109 [download] | Google Drive |
| Gaussian color image denoising | DIV2K (800 training images) + Flickr2K (2650 images) + BSD500 (400 training & testing images) + WED (4744 images) [complete dataset DFWB_RGB download] | CBSD68 + Kodak24 + McMaster + Urban100 [download] | Google Drive |
| Real image denoising | SIDD (320 training images) [complete dataset SIDD download] | SIDD + DND [download] | Google Drive |
| Grayscale JPEG compression artifact reduction | DIV2K (800 training images) + Flickr2K (2650 images) + BSD500 (400 training & testing images) + WED (4744 images) [complete dataset DFWB_CAR download] | Classic5 + LIVE1 [download] | Google Drive |
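Once downloaded, a directory layout consistent with the training and testing commands in this README would look like the sketch below (folder names are taken from the instructions that follow):

datasets/
β”œβ”€β”€ DF2K/        # image SR training set (DIV2K + Flickr2K)
β”œβ”€β”€ SR/          # image SR testing sets (Set5, Set14, BSD100, Urban100, Manga109)
β”œβ”€β”€ DFWB_RGB/    # Gaussian color denoising training set
β”œβ”€β”€ ColorDN/     # Gaussian color denoising testing sets
β”œβ”€β”€ DFWB_CAR/    # JPEG CAR training set
└── JPEG_CAR/    # JPEG CAR testing sets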

<a name="training"></a> :hourglass: Training

Train on SR

  1. Please download the corresponding training datasets and put them in the folder datasets/DF2K. Download the testing datasets and put them in the folder datasets/SR.

  2. Follow the instructions below to begin training our model.

# Classic SR task (Base model as default), cropped input=64Γ—64, 8 GPUs, batch size=4 per GPU
python -m torch.distributed.launch --nproc_per_node=8 --master_port=1234 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_SR_x2.yml --launcher pytorch
python -m torch.distributed.launch --nproc_per_node=8 --master_port=1234 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_SR_x3.yml --launcher pytorch
python -m torch.distributed.launch --nproc_per_node=8 --master_port=1234 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_SR_x4.yml --launcher pytorch

# for training our Small or Large model, use the following command
python -m torch.distributed.launch --nproc_per_node=8 --master_port=1234 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_SRSmall_x4.yml --launcher pytorch
python -m torch.distributed.launch --nproc_per_node=8 --master_port=1234 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_SRLarge_x4.yml --launcher pytorch
# Lightweight SR task, cropped input=64Γ—64, 2 GPUs, batch size=16 per GPU
python -m torch.distributed.launch --nproc_per_node=2 --master_port=1234 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_lightSR_x2.yml --launcher pytorch
python -m torch.distributed.launch --nproc_per_node=2 --master_port=1234 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_lightSR_x3.yml --launcher pytorch
python -m torch.distributed.launch --nproc_per_node=2 --master_port=1234 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_lightSR_x4.yml --launcher pytorch
  3. Run the script, and you can then find the generated experiment logs in the folder experiments.
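Note: torch.distributed.launch is deprecated in recent PyTorch releases. If it is unavailable in your setup, the equivalent torchrun invocation should behave the same, since BasicSR reads the rank from environment variables (an untested sketch, shown for the x2 config):

torchrun --nproc_per_node=8 --master_port=1234 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_SR_x2.yml --launcher pytorch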

Train on Gaussian Color Image Denoising

  1. Download the corresponding training datasets here and put them in the folder ./datasets/DFWB_RGB. Download the testing datasets and put them in the folder ./datasets/ColorDN.

  2. Follow the instructions below to begin training:

# train on denoising, noise level 15
python -m torch.distributed.launch --nproc_per_node=8 --master_port=2414 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_ColorDN_level15.yml --launcher pytorch
  3. Run the script, and you can then find the generated experiment logs in the folder ./experiments.

Train on JPEG Compression Artifact Reduction

  1. Download the corresponding training datasets here and put them in the folder ./datasets/DFWB_CAR. Download the testing datasets and put them in the folder ./datasets/JPEG_CAR.

  2. Follow the instructions below to begin training:

# train on jpeg10
python -m torch.distributed.launch --nproc_per_node=8 --master_port=2414 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_CAR_q10.yml --launcher pytorch

# train on jpeg30
python -m torch.distributed.launch --nproc_per_node=8 --master_port=2414 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_CAR_q30.yml --launcher pytorch

# train on jpeg40
python -m torch.distributed.launch --nproc_per_node=8 --master_port=2414 basicsr/train.py -opt options/train/mambairv2/train_MambaIRv2_CAR_q40.yml --launcher pytorch
  3. Run the script, and you can then find the generated experiment logs in the folder ./experiments.

<a name="testing"></a> :smile: Testing

Test on SR

  1. Please download the corresponding testing datasets and put them in the folder datasets/SR. Download the corresponding models and put them in the folder experiments/pretrained.

  2. Follow the instructions below to begin testing our MambaIRv2 model.

# test for image SR (we use the Base model as default). 
python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_SR_x2.yml
python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_SR_x3.yml
python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_SR_x4.yml

# if you want to test our Small or Large model, you can use the following command
python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_SRSmall_x4.yml
python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_SRLarge_x4.yml
# test for lightweight image SR. 
python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_lightSR_x2.yml
python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_lightSR_x3.yml
python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_lightSR_x4.yml
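Testing runs in a single process; if your machine has several GPUs, you can pin the run to one of them with the standard CUDA_VISIBLE_DEVICES variable, e.g.:

CUDA_VISIBLE_DEVICES=0 python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_SR_x4.yml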

Test on Gaussian Color Image Denoising

  1. Please download the corresponding testing datasets and put them in the folder datasets/ColorDN.

  2. Download the corresponding models and put them in the folder experiments/pretrained_models.

  3. Follow the instructions below to begin testing our model.

# test on denoising, noise level 15
python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_ColorDN_level15.yml

Test on JPEG Compression Artifact Reduction

  1. Please download the corresponding testing datasets and put them in the folder datasets/JPEG_CAR.

  2. Download the corresponding models and put them in the folder experiments/pretrained_models.

  3. Follow the instructions below to begin testing our model.

# test on jpeg10
python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_CAR_q10.yml

# test on jpeg30
python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_CAR_q30.yml

# test on jpeg40
python basicsr/test.py -opt options/test/mambairv2/test_MambaIRv2_CAR_q40.yml

<a name="cite"></a> πŸ₯° Citation

Please cite us if our work is useful for your research.

@inproceedings{guo2025mambair,
  title={MambaIR: A simple baseline for image restoration with state-space model},
  author={Guo, Hang and Li, Jinmin and Dai, Tao and Ouyang, Zhihao and Ren, Xudong and Xia, Shu-Tao},
  booktitle={European Conference on Computer Vision},
  pages={222--241},
  year={2024},
  organization={Springer}
}

@article{guo2024mambairv2,
  title={MambaIRv2: Attentive State Space Restoration},
  author={Guo, Hang and Guo, Yong and Zha, Yaohua and Zhang, Yulun and Li, Wenbo and Dai, Tao and Xia, Shu-Tao and Li, Yawei},
  journal={arXiv preprint arXiv:2411.15269},
  year={2024}
}

License

This project is released under the Apache 2.0 license.

Acknowledgement

This code is based on BasicSR, ART, and VMamba. Thanks for their awesome work.

Contact

If you have any questions, feel free to contact me at cshguo@gmail.com.