Crowdsampling the Plenoptic Function
This repository contains a PyTorch implementation of the paper:
Crowdsampling The Plenoptic Function, ECCV 2020.
[Project Website] [Paper] [Video]
Zhengqi Li, Wenqi Xian, Abe Davis, Noah Snavely
Dataset
Download and unzip data from the links below:
- [Trevi Fountain]
- [Piazza Navona]
- [Top of the Rock]
- [Pantheon]
- [Sacre Coeur]
- [Lincoln Memorial]
- [Eiffel Tower]
- [Mount Rushmore]
Read more about the dataset in the README file.
Dependency
The code is tested with PyTorch >= 1.2. The dependencies include:
- matplotlib
- opencv
- scikit-image
- scipy
- json
Pretrained Model
Download and unzip the pretrained models from the link. To use them, put the unzipped folders under the project root directory.
Test the pretrained model:
To run the evaluation, set the variables "root" and "data_dir" in evaluation.py to the code directory and the data directory, respectively. The released code is not highly optimized, so you will need 4 GPUs, each with more than 11 GB of memory, to run the evaluation.
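For example, the two variables at the top of evaluation.py might be set along these lines (the paths below are placeholders for your own setup):

```python
# In evaluation.py: point these at your local setup before running.
root = '/path/to/Crowdsampling-the-Plenoptic-Function'  # code (project root) directory
data_dir = '/path/to/plenoptic_data'                    # directory containing the unzipped dataset
```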
| Dataset | Name (--dataset) | Max depth (--max_depth) | FOV in degrees (--ref_fov) |
|---|---|---|---|
| Trevi Fountain | trevi | 4 | 70 |
| The Pantheon | pantheon | 25 | 65 |
| Top of the Rock | rock | 75 | 70 |
| Sacre Coeur | coeur | 20 | 65 |
| Piazza Navona | navona | 25 | 70 |
Follow the commands below:
# Usage
# python evaluation.py --dataset <name> --max_depth <max depth> --ref_fov <fov> --warp_src_img 1
python evaluation.py --dataset trevi --max_depth 4 --ref_fov 70 --warp_src_img 1
Demo of novel view synthesis:
# Usage
# python wander.py --dataset <name> --max_depth <max depth> --ref_fov <fov> --warp_src_img 1 --where_add adain --img_a_name xxx --img_b_name xxx --img_c_name xxx
python wander.py --dataset trevi --max_depth 4 --ref_fov 70 --warp_src_img 1 --where_add adain --img_a_name 5094768508_fa56e355bd.jpg --img_b_name 34558526690_e5ba5b3b9d.jpg --img_c_name 34558526690_e5ba5b3b9d.jpg
where
- img_a_name: image associated with the target viewpoint to render,
- img_b_name and img_c_name (set to the same image): image whose appearance we would like to condition on. The results will be saved in the folder demo_wander_trevi.
By running the example command, you should get the following result:
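Optionally, if wander.py writes the result as individual frames (e.g., numbered PNGs) into demo_wander_trevi rather than a ready-made video, a minimal sketch like the following can stitch them into a GIF for quick preview (the file pattern and frame timing are assumptions; check your run's output):

```python
import glob
import imageio

# Assumption: wander.py wrote per-frame PNGs into demo_wander_trevi/.
frame_paths = sorted(glob.glob('demo_wander_trevi/*.png'))
frames = [imageio.imread(p) for p in frame_paths]
imageio.mimsave('wander_trevi.gif', frames, duration=0.1)  # duration per frame in seconds
```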
Demo of appearance interpolation:
# Usage
# python interpolate_appearance.py --dataset <name> --max_depth <max depth> --ref_fov <fov> --warp_src_img 1 --where_add adain --img_a_name xxx --img_b_name xxx --img_c_name xxx
python interpolate_appearance.py --dataset trevi --max_depth 4 --ref_fov 70 --warp_src_img 1 --where_add adain --img_a_name 157303382_3ca2b644c9.jpg --img_b_name 255196242_3f46e98a0f_o.jpg --img_c_name 157303382_3ca2b644c9.jpg
where
- img_a_name: image providing the starting appearance
- img_b_name: image providing the ending appearance
- img_c_name: image associated with the target viewpoint to render
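Conceptually, this demo blends the appearance latent codes of the two images (injected through AdaIN, per --where_add adain) while keeping the target viewpoint fixed. The sketch below illustrates the idea only; the function and variable names are made up and are not the repository's actual API:

```python
import torch

def blend_appearance_codes(z_start, z_end, num_steps=10):
    """Linearly interpolate between two appearance codes.

    Each blended code would condition one rendering of the fixed target
    viewpoint, giving a smooth transition from one appearance to the other.
    """
    weights = torch.linspace(0.0, 1.0, num_steps)
    return [(1.0 - w) * z_start + w * z_end for w in weights]

# Example: 10 intermediate codes between two illustrative 256-D appearance vectors.
codes = blend_appearance_codes(torch.randn(256), torch.randn(256))
```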
Cite
Please cite our work if you find it useful:
@inproceedings{li2020crowdsampling,
  title={Crowdsampling the plenoptic function},
  author={Li, Zhengqi and Xian, Wenqi and Davis, Abe and Snavely, Noah},
  booktitle={European Conference on Computer Vision},
  pages={178--196},
  year={2020},
  organization={Springer}
}
License
This repository is released under the MIT license.