# DISCONTINUATION OF PROJECT
This project will no longer be maintained by Intel.
Intel has ceased development of and contributions to this project, including, but not limited to, maintenance, bug fixes, new releases, and updates.
Intel no longer accepts patches to this project.
If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.
# Free View Synthesis
Code repository for "Free View Synthesis", ECCV 2020.
## Setup
Install the following packages in your Python environment (one way to do this with pip is shown after the list):
- numpy (1.19.1)
- scikit-image (0.15.0)
- pillow (7.2.0)
- pytorch (1.6.0)
- torchvision (0.7.0)
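For example, the pinned versions above can be installed in one step with pip (note that the PyPI package for PyTorch is `torch`); depending on your platform, you may need the PyTorch-specific install instructions instead:

```bash
pip install numpy==1.19.1 scikit-image==0.15.0 pillow==7.2.0 torch==1.6.0 torchvision==0.7.0
```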
Clone the repository and initialize the submodule:

```bash
git clone https://github.com/intel-isl/FreeViewSynthesis.git
cd FreeViewSynthesis
git submodule update --init --recursive
```
Finally, build the Python extension needed for preprocessing:

```bash
cd ext/preprocess
cmake -DCMAKE_BUILD_TYPE=Release .
make
```
Tested with Ubuntu 18.04 and macOS Catalina. If you do not have a C++17-compatible compiler, you can change the code as described here.
## Run Free View Synthesis
Make sure you adapt the paths in `config.py` to point to the downloaded data!
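The exact contents of `config.py` are repository-specific; as a rough sketch (the variable names below are hypothetical), the idea is to point path variables at the downloaded data and models:

```python
# Illustrative sketch only -- check config.py for the real variable names.
from pathlib import Path

# Hypothetical: root of the downloaded preprocessed Tanks and Temples data
tat_root = Path("/data/ibr3d_tat")
# Hypothetical: directory where experiments.tar.gz was extracted
experiments_root = Path("./exp/experiments")
```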
You can download the pre-trained models here:

```bash
# in FreeViewSynthesis directory
wget https://storage.googleapis.com/isl-datasets/FreeViewSynthesis/experiments.tar.gz
tar xvzf experiments.tar.gz
# there should now be net*params files in exp/experiments/*/
```
Then run the evaluation via

```bash
python exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd eval --iter last --eval-dsets tat-subseq --eval-scale 0.5
```
This will run the pretrained network on the four Tanks and Temples sequences.
To train the network from scratch, you can run

```bash
python exp.py --net rnn_vgg16unet3_gruunet4.64.3 --cmd retrain
```
## Data
We provide the preprocessed Tanks and Temples dataset as we used it for training and evaluation here. Our new recordings can be downloaded in a preprocessed version from here.
We used COLMAP for camera registration, multi-view stereo, and surface reconstruction on the full-resolution images. The packages above contain the already undistorted and registered images. In addition, we provide the estimated camera calibrations, the rendered depth maps used for warping, and closest source image information.
In more detail, a single folder `ibr3d_*_scale` (where `scale` is the scale factor with respect to the original images) contains:
- `im_XXXXXXXX.[png|jpg]`: the downsampled images used as source, or target, images.
- `dm_XXXXXXXX.npy`: the rendered depth maps based on the COLMAP surface reconstruction.
- `Ks.npy`: the `3x3` intrinsic camera matrices, where `Ks[idx]` corresponds to the depth map `dm_{idx:08d}.npy`.
- `Rs.npy`: the `3x3` rotation matrices from the world coordinate system to the camera coordinate system.
- `ts.npy`: the `3` translation vectors from the world coordinate system to the camera coordinate system.
- `count_XXXXXXXX.npy`: the overlap information from target images to source images, i.e., the number of pixels that can be mapped from the target image to the individual source images. `np.argsort(np.load('count_00000000.npy'))[::-1]` gives the indices of the source images sorted from most to least overlapping.
Use `np.load` to load the numpy files.
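As an illustration of these conventions, the sketch below loads the calibration files from one such folder and projects a 3D world point into camera `idx`, then ranks the source images by overlap; the folder name and the point are placeholders:

```python
import numpy as np

# Placeholder path to one preprocessed folder (an ibr3d_*_scale directory).
root = "ibr3d_Truck_0.5"

Ks = np.load(f"{root}/Ks.npy")           # (N, 3, 3) intrinsic matrices
Rs = np.load(f"{root}/Rs.npy")           # (N, 3, 3) world-to-camera rotations
ts = np.load(f"{root}/ts.npy")           # (N, 3)    world-to-camera translations
dm = np.load(f"{root}/dm_00000000.npy")  # rendered depth map for image 0

# Project an (arbitrary) 3D world point X into camera idx:
# x_cam = R @ X + t, then pixel = K @ x_cam, divided by the depth.
idx = 0
X = np.array([0.0, 0.0, 1.0])
x_cam = Rs[idx] @ X + ts[idx]
uvw = Ks[idx] @ x_cam
u, v = uvw[:2] / uvw[2]

# Rank the source images by overlap with target image 0, most overlapping first.
counts = np.load(f"{root}/count_00000000.npy")
src_order = np.argsort(counts)[::-1]
```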
We use the Tanks and Temples dataset for training, except for the following scenes, which are used for evaluation.
- train/Truck
[172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196]
- intermediate/M60
[94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129]
- intermediate/Playground
[221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252]
- intermediate/Train
[174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248]
The numbers below each scene name indicate the indices of the target images that we used for evaluation.
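For reference, these index sets are contiguous ranges (with one gap in `intermediate/Train`) and can be written compactly, e.g. in Python:

```python
# Evaluation target indices per scene, equivalent to the lists above.
eval_targets = {
    "train/Truck": list(range(172, 197)),
    "intermediate/M60": list(range(94, 130)),
    "intermediate/Playground": list(range(221, 253)),
    "intermediate/Train": list(range(174, 195)) + list(range(227, 249)),
}
```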
## Citation
Please cite our paper if you find this work useful.
```bibtex
@inproceedings{Riegler2020FVS,
  title={Free View Synthesis},
  author={Riegler, Gernot and Koltun, Vladlen},
  booktitle={European Conference on Computer Vision},
  year={2020}
}
```