# HyperTransformer: A Textural and Spectral Feature Fusion Transformer for Pansharpening (CVPR'22)
Wele Gedara Chaminda Bandara and Vishal M. Patel
For more information, please see our
- Paper: CVPR-2022-Open-Access or arXiv
- Poster: view here
- Video Presentation: view here
- Presentation Slides: download here
## Summary
<p align="center"> <img src="/imgs/poster.jpg" /> </p>

## Setting up a virtual conda environment
Set up a virtual conda environment using the provided environment.yaml file or requirements.txt:

```bash
conda env create --name HyperTransformer --file environment.yaml
conda activate HyperTransformer
```

or

```bash
conda create --name HyperTransformer --file requirements.txt
conda activate HyperTransformer
```
## Download datasets
We use three publicly available HSI datasets for our experiments (a short snippet for inspecting the downloaded files follows the list):
- **Pavia Center scene**: Download the .mat file here and save it as `./datasets/pavia_centre/Pavia_centre.mat`.
- **Botswana dataset**: Download the .mat file here and save it as `./datasets/botswana4/Botswana.mat`.
- **Chikusei dataset**: Download the .mat file here and save it as `./datasets/chikusei/chikusei.mat`.
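After downloading, you can quickly verify each .mat file from Python before running the processing scripts. A minimal sketch using `scipy.io` (the variable names stored inside the files are not documented here, so the snippet simply lists them):

```python
# Minimal sketch: verify a downloaded .mat file and inspect its contents.
# The variable name holding the hyperspectral cube is not documented in
# this README, so we list all non-metadata keys first.
import scipy.io

mat = scipy.io.loadmat("./datasets/pavia_centre/Pavia_centre.mat")
keys = [k for k in mat.keys() if not k.startswith("__")]
print("variables in file:", keys)

cube = mat[keys[0]]  # assumed: the hyperspectral cube, H x W x bands
print("cube shape:", cube.shape)
```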
## Processing the datasets to generate LR-HSI, PAN, and Reference-HR-HSI using Wald's protocol
We use Wald's protocol to generate the LR-HSI and PAN images (an illustrative sketch of the protocol appears after the list). To generate the cubic patches:
- Run `process_pavia.m` in `./datasets/pavia_centre/` to generate cubic patches for Pavia Center.
- Run `process_botswana.m` in `./datasets/botswana4/` to generate cubic patches for Botswana.
- Run `process_chikusei.m` in `./datasets/chikusei/` to generate cubic patches for Chikusei.
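The patch generation itself is done by the MATLAB scripts above. For intuition only, here is a minimal NumPy/SciPy sketch of Wald's protocol, assuming a Gaussian blur and a scale factor of 4 (both are illustrative assumptions, not values taken from the scripts):

```python
# Illustrative sketch of Wald's protocol (not the repository's MATLAB code):
# the reference HR-HSI is blurred and decimated to synthesize the LR-HSI,
# and the PAN image is synthesized by averaging the spectral bands.
import numpy as np
from scipy.ndimage import gaussian_filter

def walds_protocol(ref_hsi: np.ndarray, scale: int = 4):
    """ref_hsi: reference HR-HSI of shape (H, W, bands)."""
    # LR-HSI: band-wise spatial Gaussian blur followed by decimation.
    # sigma = scale / 2 is a common heuristic, assumed here.
    blurred = gaussian_filter(ref_hsi, sigma=(scale / 2, scale / 2, 0))
    lr_hsi = blurred[::scale, ::scale, :]
    # PAN: unweighted average over bands; a spectral-response-weighted
    # average is often used in practice.
    pan = ref_hsi.mean(axis=2)
    return lr_hsi, pan
```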
## Training HyperTransformer
We use a two-stage procedure to train HyperTransformer: we first train the backbone and then fine-tune the MHFA modules. This yields better results and faster convergence than training the whole network at once.
### Training the backbone of HyperTransformer
Use the following commands to pre-train HyperTransformer on the three datasets (a sketch for switching the dataset programmatically follows the list).
- **Pre-training on Pavia Center dataset**: Change "train_dataset" to "pavia_dataset" in config_HSIT_PRE.json, then run
  `python train.py --config configs/config_HSIT_PRE.json`
- **Pre-training on Botswana dataset**: Change "train_dataset" to "botswana4_dataset" in config_HSIT_PRE.json, then run
  `python train.py --config configs/config_HSIT_PRE.json`
- **Pre-training on Chikusei dataset**: Change "train_dataset" to "chikusei_dataset" in config_HSIT_PRE.json, then run
  `python train.py --config configs/config_HSIT_PRE.json`
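If you switch datasets often, the config edit can also be scripted. A small sketch that rewrites only the "train_dataset" field mentioned above and leaves everything else in `configs/config_HSIT_PRE.json` untouched:

```python
# Minimal sketch: switch the training dataset in the pre-training config.
# Only the "train_dataset" key is documented in this README; all other
# fields are preserved as-is.
import json

cfg_path = "configs/config_HSIT_PRE.json"
with open(cfg_path) as f:
    cfg = json.load(f)

# One of: "pavia_dataset", "botswana4_dataset", "chikusei_dataset"
cfg["train_dataset"] = "botswana4_dataset"

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```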
### Fine-tuning the MHFA modules in HyperTransformer
Next, we fine-tune the MHFA modules in HyperTransformer, starting from the pre-trained backbone obtained in the previous step. Specify the path to the best model from the previous step via `--resume` (a sketch for sanity-checking the checkpoint follows the list).
- **Fine-tuning MHFA on Pavia Center dataset**: Change "train_dataset" to "pavia_dataset" in config_HSIT.json, then run
  `python train.py --config configs/config_HSIT.json --resume ./Experiments/HSIT_PRE/pavia_dataset/N_modules\(4\)/best_model.pth`
- **Fine-tuning on Botswana dataset**: Change "train_dataset" to "botswana4_dataset" in config_HSIT.json, then run
  `python train.py --config configs/config_HSIT.json --resume ./Experiments/HSIT_PRE/botswana4/N_modules\(4\)/best_model.pth`
- **Fine-tuning on Chikusei dataset**: Change "train_dataset" to "chikusei_dataset" in config_HSIT.json, then run
  `python train.py --config configs/config_HSIT.json --resume ./Experiments/HSIT_PRE/chikusei_dataset/N_modules\(4\)/best_model.pth`
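Before launching a long fine-tuning run, it can help to confirm that the `--resume` path points at a loadable checkpoint. A minimal PyTorch sketch (the internal layout of `best_model.pth` is an assumption; this only checks that the file loads and lists its top-level structure):

```python
# Minimal sketch: confirm the pre-trained checkpoint loads before
# passing it to train.py via --resume. The checkpoint's internal
# layout is an assumption; we only list its top-level structure.
import torch

ckpt_path = "./Experiments/HSIT_PRE/pavia_dataset/N_modules(4)/best_model.pth"
checkpoint = torch.load(ckpt_path, map_location="cpu")

if isinstance(checkpoint, dict):
    print("checkpoint keys:", list(checkpoint.keys()))
else:
    print("checkpoint object of type:", type(checkpoint))
```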
## Trained models and pansharpened results on the test set
You can download the trained models and final prediction outputs for each dataset through the following links:
- Pavia Center: Download here
- Botswana: Download here
- Chikusei: Download here
## Citation
If you find our work useful, please consider citing our paper.
```bibtex
@InProceedings{Bandara_2022_CVPR,
    author    = {Bandara, Wele Gedara Chaminda and Patel, Vishal M.},
    title     = {HyperTransformer: A Textural and Spectral Feature Fusion Transformer for Pansharpening},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {1767-1777}
}
```