Contents
- Local and Global GAN
- Cross-View Image Translation
- Semantic Image Synthesis
- Acknowledgments
- Related Projects
- Citation
- Contributions
- Collaborations
Local and Global GAN
Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation
Hao Tang, Dan Xu, Yan Yan, Philip H.S. Torr, Nicu Sebe.
<br>In CVPR 2020.<br>
This repository offers the official PyTorch implementation of our paper.
Also check out our related ACM MM 2020 paper Dual Attention GANs for Semantic Image Synthesis and our TIP 2021 paper Layout-to-Image Translation with Double Pooling Generative Adversarial Networks.
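As a rough, unofficial illustration of the core idea (not the repository's actual code; module names and channel sizes here are hypothetical), a semantic-guided generator can combine a global image-level branch with local class-specific branches, fusing the two with pixel-wise learned weights:

```python
import torch
import torch.nn as nn

class LocalGlobalGenerator(nn.Module):
    """Minimal sketch of local/global fusion, assuming a one-hot
    semantic label map of shape (B, num_classes, H, W) as input."""

    def __init__(self, num_classes, feat_ch=64):
        super().__init__()
        self.num_classes = num_classes
        # Shared encoder over the semantic layout.
        self.encoder = nn.Sequential(
            nn.Conv2d(num_classes, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Global branch: one RGB image for the whole scene.
        self.global_head = nn.Conv2d(feat_ch, 3, 3, padding=1)
        # Local branch: one RGB image per semantic class.
        self.local_head = nn.Conv2d(feat_ch, 3 * num_classes, 3, padding=1)
        # Pixel-wise fusion weights for the two branches.
        self.weight_head = nn.Conv2d(feat_ch, 2, 3, padding=1)

    def forward(self, label_onehot):
        f = self.encoder(label_onehot)
        g = torch.tanh(self.global_head(f))  # (B, 3, H, W)
        B, _, H, W = label_onehot.shape
        per_class = torch.tanh(self.local_head(f)).view(
            B, self.num_classes, 3, H, W)
        # Mask each class-specific image by its class region, then sum.
        l = (per_class * label_onehot.unsqueeze(2)).sum(dim=1)  # (B, 3, H, W)
        w = torch.softmax(self.weight_head(f), dim=1)  # (B, 2, H, W)
        return w[:, :1] * g + w[:, 1:] * l
```

The local branch lets each semantic class be synthesized by its own filters, while the global branch keeps the scene coherent; see the paper and the framework figure below for the full design.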
Framework
<img src='./imgs/framework.jpg' width=1200>

Cross-View Image Translation Results on Dayton and CVUSA
<center> <img src='./imgs/cross_view_results.jpg' width=600> </center>

Semantic Image Synthesis Results on Cityscapes and ADE20K
<img src='./imgs/semantic_results.jpg' width=1200>

Generated Segmentation Maps on Cityscapes
<img src='./imgs/seg_city.jpg' width=1200>

Generated Segmentation Maps on ADE20K
<img src='./imgs/seg_ade20k.jpg' width=1200>

Generated Feature Maps on Cityscapes
<img src='./imgs/feature_map.jpg' width=1200>

License
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /> Copyright (C) 2020 University of Trento, Italy.
All rights reserved. Licensed under the CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International).
The code is released for academic research use only. For commercial use, please contact bjdxtanghao@gmail.com.
Cross-View Image Translation
Please refer to the cross_view_translation folder for more details.
Semantic Image Synthesis
Please refer to the semantic_image_synthesis folder for more details.
Acknowledgments
The cross-view image translation code is inspired by SelectionGAN, and the semantic image synthesis code is inspired by GauGAN/SPADE.
Related Projects
SelectionGAN | ECGAN | DPGAN | DAGAN | PanoGAN | Guided-I2I-Translation-Papers
Citation
If you use this code for your research, please cite our papers.
LGGAN
@article{tang2022local,
title={Local and Global GANs with Semantic-Aware Upsampling for Image Generation},
author={Tang, Hao and Shao, Ling and Torr, Philip HS and Sebe, Nicu},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
year={2022}
}
@inproceedings{tang2019local,
title={Local Class-Specific and Global Image-Level Generative Adversarial Networks for Semantic-Guided Scene Generation},
author={Tang, Hao and Xu, Dan and Yan, Yan and Torr, Philip HS and Sebe, Nicu},
booktitle={CVPR},
year={2020}
}
SelectionGAN
@article{tang2022multi,
title={Multi-Channel Attention Selection GANs for Guided Image-to-Image Translation},
author={Tang, Hao and Torr, Philip HS and Sebe, Nicu},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
year={2022}
}
@inproceedings{tang2019multi,
title={Multi-channel attention selection gan with cascaded semantic guidance for cross-view image translation},
author={Tang, Hao and Xu, Dan and Sebe, Nicu and Wang, Yanzhi and Corso, Jason J and Yan, Yan},
booktitle={CVPR},
year={2019}
}
ECGAN
@inproceedings{tang2023edge,
title={Edge Guided GANs with Contrastive Learning for Semantic Image Synthesis},
author={Tang, Hao and Qi, Xiaojuan and Sun, Guolei and Xu, Dan and Sebe, Nicu and Timofte, Radu and Van Gool, Luc},
booktitle={ICLR},
year={2023}
}
DPGAN
@article{tang2021layout,
title={Layout-to-image translation with double pooling generative adversarial networks},
author={Tang, Hao and Sebe, Nicu},
journal={IEEE Transactions on Image Processing (TIP)},
volume={30},
pages={7903--7913},
year={2021}
}
DAGAN
@inproceedings{tang2020dual,
title={Dual Attention GANs for Semantic Image Synthesis},
author={Tang, Hao and Bai, Song and Sebe, Nicu},
booktitle={ACM MM},
year={2020}
}
PanoGAN
@article{wu2022cross,
title={Cross-View Panorama Image Synthesis},
author={Wu, Songsong and Tang, Hao and Jing, Xiao-Yuan and Zhao, Haifeng and Qian, Jianjun and Sebe, Nicu and Yan, Yan},
journal={IEEE Transactions on Multimedia (TMM)},
year={2022}
}
Contributions
If you have any questions, comments, or bug reports, feel free to open a GitHub issue, submit a pull request, or e-mail the author, Hao Tang (bjdxtanghao@gmail.com).
Collaborations
I'm always interested in meeting new people and hearing about potential collaborations. If you'd like to work together or get in touch, please email bjdxtanghao@gmail.com. Some of our projects are listed here.
If you really want to do something, you'll find a way. If you don't, you'll find an excuse.