
Pixel Adaptive Deep Unfolding Transformer for Hyperspectral Image Reconstruction

<div align="center"> <a target='_blank'> Miaoyu Li <sup>1</sup> </a>&emsp; <a href='https://ying-fu.github.io/' target='_blank'>Ying Fu<sup>1</sup></a>&emsp; <a target='_blank'> Ji Liu <sup>2</sup> </a>&emsp; <a href='http://yulunzhang.com/' target='_blank'>Yulun Zhang <sup>3</sup></a>&emsp; <br> <div> <sup>1</sup> Beijing Institute of Technology &emsp; <sup>2</sup> Baidu Inc. &emsp; <sup>3</sup> ETH Zurich &emsp; </div> <br> <i><strong><a target='_blank'>ICCV 2023</a></strong></i> <br> <br> </div>


1. Comparison with State-of-the-art Methods

| Method | Params (M) | FLOPs (G) | PSNR | SSIM | Model Zoo | Result |
|---|---|---|---|---|---|---|
| DAUHST-L | 6.15 | 79.50 | 38.36 | 0.967 | Repo | Repo |
| PADUT-3stg | 1.35 | 22.91 | 36.95 | 0.962 | Google Drive | Google Drive |
| PADUT-5stg | 2.24 | 37.90 | 37.84 | 0.967 | Google Drive | Google Drive |
| PADUT-7stg | 3.14 | 52.90 | 38.41 | 0.971 | Google Drive | Google Drive |
| PADUT-12stg | 5.38 | 90.46 | 38.89 | 0.972 | Google Drive | Google Drive |

2. Create Environment

pip install -r requirements.txt
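
For a quick sanity check after installation (a minimal sketch; it assumes PyTorch is among the packages pulled in by requirements.txt, which the training and testing scripts rely on), you can verify that a CUDA-capable GPU is visible:

```python
# Verify that PyTorch imports correctly and report GPU availability.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```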

3. Data Preparation

Download cave_1024_28 (Baidu Disk, code: fo0q | One Drive), CAVE_512_28 (Baidu Disk, code: ixoe | One Drive), KAIST_CVPR2021 (Baidu Disk, code: 5mmn | One Drive), TSA_simu_data (Baidu Disk, code: efu8 | One Drive), and TSA_real_data (Baidu Disk, code: eaqe | One Drive), then place them in the corresponding folders of datasets/ so that the directory structure looks as follows:


    |--real
    	|-- test_code
    	|-- train_code
    |--simulation
    	|-- test_code
    	|-- train_code
    |--datasets
        |--cave_1024_28
            |--scene1.mat
            |--scene2.mat
            :  
            |--scene205.mat
        |--CAVE_512_28
            |--scene1.mat
            |--scene2.mat
            :  
            |--scene30.mat
        |--KAIST_CVPR2021  
            |--1.mat
            |--2.mat
            : 
            |--30.mat
        |--TSA_simu_data  
            |--mask.mat   
            |--Truth
                |--scene01.mat
                |--scene02.mat
                : 
                |--scene10.mat
        |--TSA_real_data  
            |--mask.mat   
            |--Measurements
                |--scene1.mat
                |--scene2.mat
                : 
                |--scene5.mat

Following TSA-Net and DGSMP, we use the CAVE dataset (cave_1024_28) as the simulation training set. Both the CAVE (CAVE_512_28) and KAIST (KAIST_CVPR2021) datasets are used as the real training set.
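
Before training, you may want to confirm that the datasets are readable. The snippet below is only an illustrative sketch: it assumes the scene files are standard MATLAB .mat files loadable with scipy.io.loadmat, and since the variable names stored inside differ between datasets, it simply lists every array a file contains:

```python
# Inspect one CAVE scene to confirm the data layout.
import scipy.io as sio

mat = sio.loadmat('./datasets/cave_1024_28/scene1.mat')
for key, value in mat.items():
    if not key.startswith('__'):  # skip MATLAB metadata entries
        print(key, value.shape, value.dtype)
```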

4. Simulation Experiment

4.1 Training

cd simulation
python train.py --template dauhst --outf ./exp/padut_3stg/ --method padut_3

python train.py --template dauhst --outf ./exp/padut_5stg/ --method padut_5

python train.py --template dauhst --outf ./exp/padut_7stg/ --method padut_7

python train.py --template dauhst --outf ./exp/padut_12stg/ --method padut_12

4.2 Testing

python test.py --template dauhst --outf ./exp/padut_3stg/ --method padut_3 --pretrained_model_path ./checkpoints/3.pth

python test.py --template dauhst --outf ./exp/padut_5stg/ --method padut_5 --pretrained_model_path ./checkpoints/5.pth

python test.py --template dauhst --outf ./exp/padut_7stg/ --method padut_7 --pretrained_model_path ./checkpoints/7.pth

python test.py --template dauhst --outf ./exp/padut_12stg/ --method padut_12 --pretrained_model_path ./checkpoints/12.pth
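
To double-check a reported PSNR value from a saved reconstruction, a short sketch is given below. It is purely illustrative: the array names, the H x W x 28 shape, and the [0, 1] value range are assumptions rather than the actual output format of test.py, and may need to be adapted:

```python
# Hypothetical PSNR check between a ground-truth cube and a reconstruction.
import numpy as np

def psnr(truth: np.ndarray, recon: np.ndarray, data_range: float = 1.0) -> float:
    mse = np.mean((truth.astype(np.float64) - recon.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Example usage with random stand-in data in place of real cubes.
truth = np.random.rand(256, 256, 28)
recon = truth + 0.01 * np.random.randn(256, 256, 28)
print(f"PSNR: {psnr(truth, recon):.2f} dB")
```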

5. Real Experiment

5.1 Training

python train.py  --template dauhst --outf ./exp/padut_3stg/ --method padut_3  

5.2 Testing

python test.py  --template dauhst --outf ./exp/padut_3stg/ --method padut_3    --pretrained_model_path ./checkpoints/3.pth

6. Acknowledgements

The implementation in this repository is based on the following two works:


@inproceedings{mst,
  title={Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction},
  author={Yuanhao Cai and Jing Lin and Xiaowan Hu and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
@inproceedings{res,
  title={Residual Degradation Learning Unfolding Framework with Mixing Priors across Spectral and Spatial for Compressive Spectral Imaging},
  author={Yubo Dong and Dahua Gao and Tian Qiu and Yuyan Li and Minxi Yang and Guangming Shi},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}

Citation

@inproceedings{PADUT,
  title={Pixel Adaptive Deep Unfolding Transformer for Hyperspectral Image Reconstruction},
  author={Miaoyu Li and Ying Fu and Ji Liu and Yulun Zhang},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  year={2023}
}