Synthetic2Realistic

This repository implements the training and testing of T2Net for "T2Net: Synthetic-to-Realistic Translation for Solving Single-Image Depth Estimation Tasks" by Chuanxia Zheng, Tat-Jen Cham and Jianfei Cai at NTU. A video is available on YouTube. The repository offers a PyTorch implementation of the paper.

<img src='Image/image2depth_outdoor.gif' align="center"> <img src='Image/image2depth_syn2real_indoor.jpg' align="center"> <img src='Image/horse2zebra.png' align="center">

This repository can be used for training and testing of unpaired image-to-image translation and single-image depth estimation.

Getting Started

Installation

This code was tested with PyTorch 0.4.0, CUDA 8.0, Python 3.6 and Ubuntu 16.04.

pip install visdom dominate
git clone https://github.com/lyndonzheng/Synthetic2Realistic
cd Synthetic2Realistic

Datasets

The indoor synthetic dataset is rendered from SUNCG and the indoor realistic dataset comes from NYUv2. The outdoor synthetic dataset is vKITTI and the outdoor realistic dataset is KITTI.

Training

Warning: The input sizes need to be multiples of 64. The feature GAN model needs to be changed for different scales.
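To satisfy the multiple-of-64 constraint above, inputs can be zero-padded up to the next valid size before training. This is a minimal sketch; `pad_to_multiple` is a hypothetical helper, not part of this repository.

```python
# Sketch: zero-pad an H x W x C image so both spatial dimensions are
# multiples of 64, as required by the multi-scale feature GAN.
import numpy as np

def pad_to_multiple(img, multiple=64):
    """Zero-pad the bottom/right of an H x W x C array to the next multiple."""
    h, w = img.shape[:2]
    new_h = ((h + multiple - 1) // multiple) * multiple
    new_w = ((w + multiple - 1) // multiple) * multiple
    return np.pad(img, ((0, new_h - h), (0, new_w - w), (0, 0)),
                  mode="constant")

# A KITTI-sized frame (375 x 1242) is padded to 384 x 1280.
padded = pad_to_multiple(np.zeros((375, 1242, 3)))
print(padded.shape)  # (384, 1280, 3)
```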

python train.py --name Outdoor_nyu_wsupervised --model wsupervised \
--img_source_file /dataset/Image2Depth31_KITTI/trainA_SYN.txt \
--img_target_file /dataset/Image2Depth31_KITTI/trainA.txt \
--lab_source_file /dataset/Image2Depth31_KITTI/trainB_SYN.txt \
--lab_target_file /dataset/Image2Depth31_KITTI/trainB.txt \
--shuffle --flip --rotation

Testing

python test.py --name Outdoor_nyu_wsupervised --model test \
--img_source_file /dataset/Image2Depth31_KITTI/testA_SYN80 \
--img_target_file /dataset/Image2Depth31_KITTI/testA

Evaluation

python evaluation.py --split eigen --file_path ./datasplit/ \
--gt_path "your path"/KITTI/raw_data_KITTI/ \
--predicted_depth_path "your path"/result/KITTI/predicted_depth_vk \
--garg_crop
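The evaluation on the Eigen split reports the standard monocular depth metrics (absolute relative error, squared relative error, RMSE, and threshold accuracy). The sketch below is a generic reimplementation of those metrics for illustration, not the repository's `evaluation.py`.

```python
# Sketch of standard monocular-depth metrics on valid ground-truth pixels.
import numpy as np

def depth_metrics(gt, pred):
    """Return (abs_rel, sq_rel, rmse, delta<1.25 accuracy)."""
    thresh = np.maximum(gt / pred, pred / gt)
    d1 = float((thresh < 1.25).mean())          # fraction within 25% of gt
    abs_rel = float(np.mean(np.abs(gt - pred) / gt))
    sq_rel = float(np.mean(((gt - pred) ** 2) / gt))
    rmse = float(np.sqrt(np.mean((gt - pred) ** 2)))
    return abs_rel, sq_rel, rmse, d1

gt = np.array([10.0, 20.0, 30.0])
print(depth_metrics(gt, gt))  # perfect prediction: (0.0, 0.0, 0.0, 1.0)
```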

Trained Models

The pretrained model for the indoor scene (weakly supervised).

The pretrained model for the outdoor scene (weakly supervised).

Note: Since our original model in the paper was trained on a single GPU, this pretrained model is for the multi-GPU version.

Citation

If you use this code for your research, please cite our paper.

@inproceedings{zheng2018t2net,
  title={T2Net: Synthetic-to-Realistic Translation for Solving Single-Image Depth Estimation Tasks},
  author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={767--783},
  year={2018}
}

Acknowledgments

Code is inspired by Pytorch-CycleGAN