# [CVPR2022] Unsupervised Homography Estimation with Coplanarity-Aware GAN
<h4 align="center">Mingbo Hong<sup>1,2</sup>, Yuhang Lu<sup>1,3</sup>, Nianjin Ye<sup>1</sup>, Chunyu Lin<sup>4</sup>, Qijun Zhao<sup>2</sup>, Shuaicheng Liu<sup>5,1</sup></h4>
<h4 align="center">1. Megvii Technology, 2. Sichuan University, 3. University of South Carolina</h4>
<h4 align="center">4. Beijing Jiaotong University, 5. University of Electronic Science and Technology of China</h4>

This is the official implementation of HomoGAN, CVPR 2022, [PDF]
## Presentation Video
## Summary
<p align="center">
  <img src="https://github.com/megvii-research/HomoGAN/blob/main/images/slide.png" width="780px" height="430px">
</p>

## Pipeline
## Dependencies
`pip install -r requirements.txt`
## Download the Deep Homography Dataset
Please refer to Content-Aware Unsupervised Deep Homography Estimation.
- Download the raw dataset:
  - Google Drive: https://drive.google.com/file/d/19d2ylBUPcMQBb_MNBBGl9rCAS7SU-oGm/view?usp=sharing
  - BaiduYun: https://pan.baidu.com/s/1Dkmz4MEzMtBx-T7nG0ORqA (key: gvor)
- Unzip the data to the directory `./dataset`.
- Run `video2img.py`. Be sure to scale the images to (640, 360), since the point coordinates are defined in the (640, 360) coordinate system, e.g. `img = cv2.resize(img, (640, 360))`.
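For reference, below is a minimal sketch of the kind of frame-extraction loop `video2img.py` performs. It is not the actual script: the paths and output naming are placeholders, and only the (640, 360) resize is the essential part.

```python
import os
import cv2

def extract_frames(video_path, out_dir, size=(640, 360)):
    """Decode a video and save every frame resized to `size` (width, height)."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize so pixel coordinates match the (640, 360) point annotations.
        frame = cv2.resize(frame, size)
        cv2.imwrite(os.path.join(out_dir, "%06d.jpg" % idx), frame)
        idx += 1
    cap.release()

# Example with placeholder paths:
# extract_frames("./dataset/raw/some_video.mp4", "./dataset/img/some_video")
```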
## Pre-trained model
The models provided below are retrained versions (with minor differences in quantitative results).
| Model | RE | LT | LL | SF | LF | Avg | Weights |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Pre-trained | 0.24 | 0.47 | 0.59 | 0.62 | 0.43 | 0.47 | Baidu / Google |
| Fine-tuning | 0.22 | 0.38 | 0.57 | 0.47 | 0.30 | 0.39 | Baidu / Google |
## How to test?
`python evaluate.py --model_dir ./experiments/HomoGAN/ --restore_file xxx.pth`
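If you want to sanity-check a downloaded checkpoint before evaluation, the sketch below loads it with PyTorch and prints its top-level structure. This is a generic inspection helper, not part of the official pipeline; the exact keys (e.g. whether the weights sit under a `state_dict` entry) depend on how the repository saves its models.

```python
import torch

# Load a downloaded checkpoint on CPU to inspect its contents.
ckpt = torch.load("xxx.pth", map_location="cpu")

if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))
    # If the weights are nested under "state_dict", look there; otherwise
    # assume the dict itself is the state dict (this layout is an assumption).
    state = ckpt.get("state_dict", ckpt)
    for name, value in list(state.items())[:5]:
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
        print(name, shape)
```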
## How to train?
You need to modify `./dataset/data_loader.py` slightly for your environment; you can also refer to Content-Aware Unsupervised Deep Homography Estimation.
Pre-training:

1) Set `"pretrain_phase"` in `./experiments/HomoGAN/params.json` to `true`.
2) Run `python train.py --model_dir ./experiments/HomoGAN/`
Fine-tuning:

1) Set `"pretrain_phase"` in `./experiments/HomoGAN/params.json` to `false`.
2) Run `python train.py --model_dir ./experiments/HomoGAN/ --restore_file xxx.pth`
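If you prefer to toggle the phase from a script instead of editing the file by hand, the sketch below rewrites the `pretrain_phase` flag with Python's standard `json` module. It assumes only that `params.json` contains that key and leaves everything else in the file untouched.

```python
import json

def set_pretrain_phase(params_path, pretrain):
    """Flip the 'pretrain_phase' flag in an experiment's params.json."""
    with open(params_path, "r") as f:
        params = json.load(f)
    params["pretrain_phase"] = pretrain
    with open(params_path, "w") as f:
        json.dump(params, f, indent=4)

# Pre-training:
# set_pretrain_phase("./experiments/HomoGAN/params.json", True)
# Fine-tuning:
# set_pretrain_phase("./experiments/HomoGAN/params.json", False)
```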
## Citation
If you use this code or ideas from the paper for your research, please cite our paper:
    @InProceedings{Hong_2022_CVPR,
        author    = {Hong, Mingbo and Lu, Yuhang and Ye, Nianjin and Lin, Chunyu and Zhao, Qijun and Liu, Shuaicheng},
        title     = {Unsupervised Homography Estimation With Coplanarity-Aware GAN},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {June},
        year      = {2022},
        pages     = {17663-17672}
    }
## Acknowledgments
In this project we use (parts of) the official implementations of the following works:
We thank the respective authors for open sourcing their methods.