# CRS-Diff: Controllable Generative Remote Sensing Foundation Model
[Paper (ArXiv)](https://arxiv.org/abs/2403.11614)
<div align=center> <img src="img/figure_1.png" height="100%" width="100%"/> </div>

## TODO
- Release inference code.
- Release pretrained models.
- Release Gradio UI.
- Release training code.
## Environment

```bash
conda env create -f environment.yaml
conda activate csrldm
```
You can download the pre-trained model `last.ckpt` and put it in the `./ckpt/` folder.
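If you are unsure the weights ended up in the right place, a quick check like the following can help (a minimal sketch; the path `ckpt/last.ckpt` follows the folder mentioned above):

```python
# Sanity check (sketch): verify the pre-trained weights are where the scripts expect them.
from pathlib import Path

ckpt = Path("ckpt") / "last.ckpt"
if not ckpt.exists():
    raise FileNotFoundError(f"Place the pre-trained weights at {ckpt} before running the demos.")
```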
## Testing
You can start the Gradio interface by running:

```bash
python src/test/test.py
```
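For orientation, the demo is a standard Gradio app; a minimal sketch of such a text-to-image interface is shown below. The `generate_image` function here is a hypothetical placeholder, not the actual CRS-Diff sampler; the full demo lives in `src/test/test.py`.

```python
# Minimal illustrative Gradio wiring (sketch only, not the repo's UI).
import gradio as gr
from PIL import Image

def generate_image(prompt: str) -> Image.Image:
    # Hypothetical placeholder: the real demo runs the CRS-Diff pipeline
    # conditioned on the text prompt and the selected control inputs.
    return Image.new("RGB", (256, 256), color="gray")

demo = gr.Interface(fn=generate_image, inputs="text", outputs="image")
demo.launch()
```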
A demonstration of the interface is shown below:
<div align=center> <img src="img/figure_2.png" height="100%" width="100%"/> </div>

You can also generate images more quickly with:

```bash
python src/test/inference.py
```
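If you want to script batch generation around the inference entry point, a loop along these lines can organize the outputs. Note that `sample` is a hypothetical stand-in for the CRS-Diff sampling call and the prompts are only illustrative.

```python
# Illustrative batch loop (sketch): `sample` is a hypothetical stand-in for the
# CRS-Diff text-to-image call and must be wired to the repo's inference code.
from pathlib import Path
from PIL import Image

def sample(prompt: str) -> Image.Image:
    raise NotImplementedError("Hook this up to the CRS-Diff sampler.")

out_dir = Path("outputs")
out_dir.mkdir(exist_ok=True)
prompts = ["an airport with parked planes", "a dense residential area"]
for i, prompt in enumerate(prompts):
    sample(prompt).save(out_dir / f"gen_{i:03d}.png")
```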
Some of the results are shown below:
<div align=center> <img src="img/figure_3.png" height="100%" width="100%"/> </div>

## Acknowledgments
This repo is built upon ControlNet and Uni-ControlNet. Some of the functional implementations for remote sensing imagery refer to GeoSeg, Txt2Img-MHN, and SGCN. Sincere thanks to the authors for their excellent work!
## Citation
```bibtex
@misc{tang2024crsdiff,
      title={CRS-Diff: Controllable Generative Remote Sensing Foundation Model},
      author={Datao Tang and Xiangyong Cao and Xingsong Hou and Zhongyuan Jiang and Deyu Meng},
      year={2024},
      eprint={2403.11614},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```