<h1 align="center">Txt2Img-MHN: Remote Sensing Image Generation from Text Using Modern Hopfield Networks</h1> <h3 align="center"> <a href="https://yonghaoxu.github.io">Yonghao Xu</a>, <a href="https://scholar.google.com/citations?user=MjsztHYAAAAJ&hl=en">Weikang Yu</a>, <a href="https://www.ai4rs.com">Pedram Ghamisi</a>, <a href="https://www.iarai.ac.at/people/michaelkopp">Michael Kopp</a>, and <a href="https://www.iarai.ac.at/people/sepphochreiter">Sepp Hochreiter</a></h3>

This is the official PyTorch implementation of the paper Txt2Img-MHN: Remote Sensing Image Generation from Text Using Modern Hopfield Networks.

Table of Contents

  1. Preparation
  2. Training VQVAE and VQGAN
  3. Training Txt2Img-MHN
  4. Image Generation
  5. Inception Score and FID Score
  6. CLIP Score
  7. Zero-Shot Classification
  8. Paper
  9. Acknowledgement
  10. License

Preparation

Download the RSICD dataset and arrange the text-image pairs in a single folder, with one caption (.txt) file per image:

├── RSICD/
│   ├── airport_1.jpg   
│   ├── airport_2.jpg  
│   ├── ...  
│   ├── viaduct_420.jpg  
│   ├── airport_1.txt   
│   ├── airport_2.txt   
│   ├── ...  
│   ├── viaduct_420.txt   
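Before training, it is worth verifying that every image has a matching caption file. The helper below is a quick sanity-check sketch (`check_pairs` is a hypothetical utility, not part of this repo):

```python
import pathlib

def check_pairs(data_dir):
    """Return (images without a .txt caption, captions without a .jpg image)."""
    root = pathlib.Path(data_dir)
    jpg_stems = {p.stem for p in root.glob("*.jpg")}
    txt_stems = {p.stem for p in root.glob("*.txt")}
    return sorted(jpg_stems - txt_stems), sorted(txt_stems - jpg_stems)
```

Both returned lists should be empty for a correctly prepared folder.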

Training VQVAE and VQGAN <a name="vae"></a>

Train VQVAE:

$ cd Txt2Img-MHN-main
$ CUDA_VISIBLE_DEVICES=0 python train_vqvae.py --data_dir /Path/To/RSICD/

Train VQGAN:

$ cd taming-transformers-master
$ CUDA_VISIBLE_DEVICES=0 python main.py --base configs/custom_vqgan.yaml -t True --gpus 0,

Training Txt2Img-MHN <a name="mhn"></a>

Train Txt2Img-MHN with the pretrained VQVAE:

$ cd Txt2Img-MHN-main
$ CUDA_VISIBLE_DEVICES=0 python train_txt2img_mhn.py --vae_type 0 --data_dir /Path/To/RSICD/ --vqvae_path /Path/To/vae.pth --batch_size 8

Train Txt2Img-MHN with the pretrained VQGAN:

$ cd Txt2Img-MHN-main
$ CUDA_VISIBLE_DEVICES=0 python train_txt2img_mhn.py --vae_type 1 --data_dir /Path/To/RSICD/ --vqgan_model_path /Path/To/last.ckpt --vqgan_config_path /Path/To/project.yaml --batch_size 8

Note: Training on multiple GPUs is supported. Simply specify the GPU IDs, e.g., CUDA_VISIBLE_DEVICES=0,1,2,3,...
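For intuition, the modern Hopfield retrieval rule at the heart of the method (the continuous update ξ ← X·softmax(β·Xᵀξ) from Ramsauer et al.) can be illustrated in a few lines of numpy. This is a toy sketch for illustration only, not the repo's implementation:

```python
import numpy as np

def hopfield_retrieve(patterns, query, beta=8.0, steps=1):
    """Modern Hopfield update: query -> patterns @ softmax(beta * patterns.T @ query)."""
    X = np.asarray(patterns, dtype=float)   # (d, N): stored patterns as columns
    xi = np.asarray(query, dtype=float)     # (d,): state/query vector
    for _ in range(steps):
        a = beta * (X.T @ xi)               # similarity of the query to each pattern
        p = np.exp(a - a.max())             # numerically stable softmax
        p /= p.sum()
        xi = X @ p                          # move toward the best-matching pattern
    return xi
```

With a large enough beta, a noisy query converges to the closest stored pattern in very few updates, which is what makes the network useful as a learned prototype memory.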

Monitor the training process with TensorBoard:

$ cd Txt2Img-MHN-main
$ tensorboard --logdir ./ --samples_per_plugin images=100

Image Generation <a name="gen"></a>

Generate images with the trained Txt2Img-MHN (VQVAE) model:

$ cd Txt2Img-MHN-main
$ CUDA_VISIBLE_DEVICES=0 python gen_im.py --vae_type 0 --data_dir /Path/To/RSICD/ --vqvae_path /Path/To/vae.pth --mhn_vqvae_path /Path/To/mhn_vqvae.pth --num_gen_per_image 10

Generate images with the trained Txt2Img-MHN (VQGAN) model:

$ cd Txt2Img-MHN-main
$ CUDA_VISIBLE_DEVICES=0 python gen_im.py --vae_type 1 --data_dir /Path/To/RSICD/ --vqgan_model_path /Path/To/last.ckpt --vqgan_config_path /Path/To/project.yaml --mhn_vqgan_path /Path/To/mhn_vqgan.pth --num_gen_per_image 10

Alternatively, you can download our pretrained models for a quick start.

Inception Score and FID Score <a name="is"></a>

To compute these metrics, first organize the real RSICD images into class-wise subfolders:

├── RSICD_cls/
│   ├── airport/
│   │   ├── airport_1.jpg
│   │   ├── airport_2.jpg
│   │   ├── ...
│   ├── bareland/
│   │   ├── bareland_1.jpg
│   │   ├── bareland_2.jpg
│   │   ├── ...
│   ├── ...
│   ├── viaduct/
│   │   ├── viaduct_1.jpg
│   │   ├── viaduct_2.jpg
│   │   ├── ...
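Since RSICD filenames encode the class as a prefix (e.g., airport_1.jpg belongs to class airport), the class-wise copy can be built with a short script. This is a sketch under that naming assumption; `build_class_folders` is a hypothetical helper, not part of this repo:

```python
import pathlib
import shutil

def build_class_folders(src_dir, dst_dir):
    """Copy each image into a subfolder named after its filename prefix."""
    src, dst = pathlib.Path(src_dir), pathlib.Path(dst_dir)
    for img in src.glob("*.jpg"):
        cls = img.stem.rsplit("_", 1)[0]        # "airport_1" -> "airport"
        (dst / cls).mkdir(parents=True, exist_ok=True)
        shutil.copy2(img, dst / cls / img.name)
```

Running it with src_dir pointing at the flat RSICD folder and dst_dir at RSICD_cls produces the layout above.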

Pretrain the Inception model on the class-wise RSICD data:

$ cd Txt2Img-MHN-main/is_fid_score
$ CUDA_VISIBLE_DEVICES=0 python pretrain_inception.py --root_dir /Path/To/RSICD_cls/

Compute the Inception Score and FID Score of the generated images:

$ cd Txt2Img-MHN-main/is_fid_score
$ CUDA_VISIBLE_DEVICES=0 python is_fid_score.py --gen_dir /Path/To/GenImgFolder/ --data_dir /Path/To/RSICD/
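For reference, the FID between the Gaussians fitted to real and generated feature activations is ||mu1 - mu2||² + Tr(S1 + S2 - 2(S1·S2)^(1/2)). A minimal numpy sketch of that formula (for illustration; not the script's actual implementation, which extracts features with the pretrained Inception model first):

```python
import numpy as np

def sqrtm_psd(a):
    """Matrix square root of a symmetric positive semidefinite matrix."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2)."""
    # Tr((S1 S2)^(1/2)) computed via the similar PSD matrix S1^(1/2) S2 S1^(1/2)
    s1_half = sqrtm_psd(sigma1)
    covmean = sqrtm_psd(s1_half @ sigma2 @ s1_half)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2) - 2.0 * np.trace(covmean))
```

Identical distributions give a FID of 0; larger values indicate a bigger gap between the real and generated feature statistics.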

CLIP Score <a name="clip"></a>

$ cd Txt2Img-MHN-main
$ CUDA_VISIBLE_DEVICES=0 python clip_score.py --gen_dir /Path/To/GenImgFolder/ --data_dir /Path/To/RSICD/
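The CLIP score is commonly defined as the cosine similarity between the CLIP embeddings of each generated image and its text prompt, averaged over the evaluation set. A schematic numpy version of that average (the embeddings themselves would come from a CLIP model; this is not the script's actual code):

```python
import numpy as np

def clip_score(img_emb, txt_emb):
    """Mean cosine similarity between paired image/text embedding rows."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    return float(np.mean(np.sum(img * txt, axis=1)))
```

Higher values mean the generated images align better with their text descriptions.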

Zero-Shot Classification <a name="cls"></a>

$ cd Txt2Img-MHN-main/zero_shot_classification
$ CUDA_VISIBLE_DEVICES=0 python zero_shot_evaluation.py --gen_dir /Path/To/GenImgFolder/ --root_dir /Path/To/RSICD/

Paper

Txt2Img-MHN: Remote Sensing Image Generation from Text Using Modern Hopfield Networks

Please cite the following paper if you find it useful for your research:

@article{txt2img_mhn,
  title={Txt2Img-MHN: Remote Sensing Image Generation from Text Using Modern Hopfield Networks},
  author={Xu, Yonghao and Yu, Weikang and Ghamisi, Pedram and Kopp, Michael and Hochreiter, Sepp},
  journal={IEEE Trans. Image Process.}, 
  doi={10.1109/TIP.2023.3323799},
  year={2023}
}

Acknowledgement

DALLE-pytorch

taming-transformers

metrics

CLIP-rsicd

This research has been conducted at the Institute of Advanced Research in Artificial Intelligence (IARAI).

License

This repo is distributed under the MIT License. The code can be used for academic purposes only.