
Novel View Synthesis in TensorFlow

Description

This project is a TensorFlow implementation of a simple novel view synthesis model, which synthesizes a target view at an arbitrary camera pose from a given source view and its camera pose. An illustration of the task is shown below.

<p align="center"> <img src="./asset/illustration.png" width="512"/> </p>

The model implemented in this repo is a simple conv-deconv network with skip connections. So that you can focus on building your own model and seeing how well it works, the data loaders, downloading scripts, training code, and training/testing splits are already configured following the setting used in Multi-view to Novel View: Synthesizing Novel Views with Self-Learned Confidence, published in ECCV 2018. All you need to do is play with the model code: synthesizer.py.
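
For reference, below is a minimal, hypothetical sketch of such a conv-deconv synthesizer with skip connections written with tf.keras. The layer widths, the pose dimensionality, and the way the pose is injected at the bottleneck are placeholders, so treat it as an illustration of the architecture rather than a copy of synthesizer.py.

```python
import tensorflow as tf

def build_synthesizer(image_size=256, channels=3, pose_dim=6):
    """A toy conv-deconv synthesizer with skip connections (not the repo's model)."""
    src = tf.keras.Input((image_size, image_size, channels), name="source_view")
    pose = tf.keras.Input((pose_dim,), name="relative_camera_pose")

    # Encoder: strided convolutions progressively downsample the source view.
    skips, x = [], src
    for filters in (32, 64, 128, 256):
        x = tf.keras.layers.Conv2D(filters, 4, strides=2, padding="same",
                                   activation="relu")(x)
        skips.append(x)

    # Inject the target camera pose by tiling it over the bottleneck feature map.
    h = image_size // 2 ** len(skips)
    p = tf.keras.layers.Dense(h * h, activation="relu")(pose)
    p = tf.keras.layers.Reshape((h, h, 1))(p)
    x = tf.keras.layers.Concatenate()([x, p])

    # Decoder: transposed convolutions upsample back to the target resolution,
    # concatenating the encoder feature map of matching size (skip connection).
    x = tf.keras.layers.Conv2DTranspose(256, 4, strides=2, padding="same",
                                        activation="relu")(x)
    for filters, skip in zip((128, 64, 32), reversed(skips[:-1])):
        x = tf.keras.layers.Concatenate()([x, skip])
        x = tf.keras.layers.Conv2DTranspose(filters, 4, strides=2, padding="same",
                                            activation="relu")(x)

    out = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="tanh",
                                 name="predicted_target_view")(x)
    return tf.keras.Model(inputs=[src, pose], outputs=out)
```

Tiling the pose onto the bottleneck is only one common choice; concatenating it to a fully connected bottleneck or predicting a warping flow are equally reasonable starting points.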

The model can be trained on images rendered from 3D object models (ShapeNet) as well as real and synthetic scenes (KITTI and Synthia). All datasets are stored as HDF5 files and are downloaded automatically the first time you run the training code.
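
After the download finishes, you can sanity-check a file with h5py. The path below is hypothetical (the real file names and group layout are defined by the dataset scripts in this repo), so inspect the file rather than relying on any particular keys:

```python
import h5py

# Hypothetical path -- substitute the HDF5 file the downloader actually wrote.
with h5py.File("datasets/shapenet/data.hdf5", "r") as f:
    def show(name, obj):
        # Print every stored dataset's name, shape, and dtype.
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
```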

<p align="center"> <img src="./asset/shapenet_example.gif" width="720"/> </p> <p align="center"> <img src="./asset/kitti_example.gif" width="720"/> </p> <p align="center"> <img src="./asset/synthia_example.gif" width="720"/> </p>

Prerequisites

Usage

Once the datasets are downloaded, you can train and test models with the following commands.

Train

Train a model from scratch

$ python trainer.py --batch_size 32 --dataset car

Fine-tune a model from a checkpoint

$ python trainer.py --batch_size 32 --dataset car --checkpoint /path/to/model/model-XXX
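
The --checkpoint flag simply resumes training from a saved TensorFlow checkpoint. Below is a minimal TF1-style sketch of saving and restoring with tf.train.Saver, using a single placeholder variable in place of the full model graph; the paths are placeholders, and trainer.py's actual restore logic may differ:

```python
import os
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # TF1-style graph mode, matching this repo's era

# A single variable stands in for the full synthesizer graph built by trainer.py.
global_step = tf.get_variable("global_step", shape=(), dtype=tf.int64,
                              initializer=tf.zeros_initializer())
saver = tf.train.Saver()

ckpt_dir = "/tmp/nvs_checkpoint_demo"  # placeholder directory
os.makedirs(ckpt_dir, exist_ok=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Write a checkpoint, then restore it -- the restore call is essentially
    # what passing --checkpoint triggers before fine-tuning continues.
    path = saver.save(sess, os.path.join(ckpt_dir, "model"), global_step=0)
    saver.restore(sess, path)
    print("resumed from", path)
```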

Interpret TensorBoard

Launch TensorBoard and go to the specified port. You can see the different losses in the Scalars tab and plotted images in the Images tab. The scalars include the L1 loss and SSIM. The plotted images show (from top to bottom) the source view, the target view, and the predicted target view.

<p align="center"> <img src="./asset/TB.png" width="680"/> </p>
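
Both scalars can be reproduced with standard TensorFlow ops. The sketch below assumes images scaled to [0, 1] and ignores any loss weighting the trainer may apply:

```python
import tensorflow as tf

def reconstruction_metrics(target, prediction):
    """L1 loss and SSIM between the ground-truth and predicted target views."""
    l1_loss = tf.reduce_mean(tf.abs(target - prediction))                  # lower is better
    ssim = tf.reduce_mean(tf.image.ssim(target, prediction, max_val=1.0))  # higher is better
    return l1_loss, ssim

# Toy example with random tensors standing in for a batch of images.
target = tf.random.uniform((4, 256, 256, 3))
prediction = tf.random.uniform((4, 256, 256, 3))
l1, ssim = reconstruction_metrics(target, prediction)
print(float(l1), float(ssim))
```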

Test

Evaluate trained models

$ python evaler.py --dataset car --loss True --plot_image True --output_dir car_image --write_summary True --summary_file log_car.txt --train_dir train_dir/default-car-bs_16_lr_0.0001-num_input-1-20190430-014454 --data_id_list ./testing_tuple_lists/id_car_random_elevation.txt
<p align="center"> <img src="./asset/car_result.png" width="800"/> </p>
Checkpoint: train_dir/default-car-bs_16_lr_0.0001-20190430-014454/model-160000
Dataset: car
Id list: ./testing_tuple_lists/id_car_elevation_0.txt
[Final Avg Report] Total datapoint: 10000 from ./testing_tuple_lists/id_car_elevation_0.txt
[Loss]
l1_loss: 0.13343
ssim: 0.90811
[Time] (63.128 sec)

Related work

The code is mainly borrowed from the implementation of Multi-view to Novel View: Synthesizing Novel Views with Self-Learned Confidence (ECCV 2018).

Check out some other work in novel view synthesis.

Cite the paper

If you find this work useful, please cite:

@inproceedings{sun2018multiview,
  title={Multi-view to Novel View: Synthesizing Novel Views with Self-Learned Confidence},
  author={Sun, Shao-Hua and Huh, Minyoung and Liao, Yuan-Hong and Zhang, Ning and Lim, Joseph J},
  booktitle={European Conference on Computer Vision},
  year={2018},
}

Authors

Shao-Hua Sun