E3D: Event-Based Shape Reconstruction
Dependencies
Installing PyTorch3D
- Linux (Ubuntu 16+ or CentOS 7)
- Python 3.6+
- PyTorch 1.0+
- gcc & g++ 4.9+
- fvcore
- CUDA 9.2+ (if CUDA is to be used)
Create an Anaconda environment from the provided env.yml:
```bash
conda env create -n pytorch3d --file env.yml
conda activate pytorch3d
```
Install a version of PyTorch and torchvision suitable for your environment (see the PyTorch installation instructions). For example:
```bash
# CPU only
conda install pytorch torchvision cpuonly -c pytorch

# CUDA 10.2
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
```
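Before building PyTorch3D, it can be worth confirming that the PyTorch build you picked actually sees your GPU. The check below is an optional addition (not part of the original instructions) and only uses standard torch/torchvision calls:

```python
# Optional sanity check: confirms torch/torchvision import and reports CUDA status.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```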
Install PyTorch3D with CUDA support (change this for your CUDA version):
```bash
conda install -c conda-forge -c fvcore -c iopath fvcore iopath
conda install -c pytorch3d pytorch3d
```
If you run into an "UnsatisfiableError" with your current CUDA driver version, install the PyTorch3D nightly build instead:
```bash
conda install -c pytorch3d-nightly pytorch3d
```
Installing PyTorch3D without CUDA support:
```bash
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
```
Other Dependencies
Installing RPG Vid2e for the event generator (CMake is required to build the code):
```bash
git clone https://github.com/alexisbdr/rpg_vid2e.git --recursive
cd rpg_vid2e
conda install -y -c conda-forge opencv tqdm scikit-video eigen boost boost-cpp pybind11
pip install -e .
```
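If the build succeeds, the event simulator is exposed through the `esim_py` Python bindings. The sketch below is illustrative only and follows the upstream rpg_vid2e documentation; the constructor argument order, the example threshold values, and the (x, y, t, polarity) column layout are assumptions that may differ in this fork, and the input paths are placeholders.

```python
# Illustrative sketch of generating events with the esim_py bindings built above.
# Parameter names/values and the returned array layout follow the upstream
# rpg_vid2e README and are assumptions; check this fork's documentation first.
import esim_py

esim = esim_py.EventSimulator(
    0.2,    # positive contrast threshold (assumed example value)
    0.2,    # negative contrast threshold (assumed example value)
    1e-4,   # refractory period in seconds
    1e-3,   # log_eps, for numerical stability of the log intensity
    True,   # use_log: operate on log intensity
)

# "frames/" and "timestamps.txt" are placeholders for a folder of ordered images
# and a file with one timestamp (in seconds) per image.
events = esim.generateFromFolder("frames/", "timestamps.txt")
print(events.shape)  # typically an N x 4 array of (x, y, t, polarity)
```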
Installing PMO:
```bash
git clone https://github.com/alexisbdr/photometric-mesh-optim.git
```
Installing pydvs for EVIMO event preprocessing:
```bash
git clone https://github.com/AlanJiang98/pydvs.git
cd pydvs/lib
sudo python3 setup.py install
```
Pre-Trained Models
Synthetic Data Models
Category | Drive Link |
---|---|
car | link |
chair | link |
dolphin (baseline) | link |
dolphin (fine-tuned) | link |
------------- | ------------- |
car (PMO - Events) | link |
chair (PMO - Events) | link |
EVIMO Data Models
Category | Drive Link |
---|---|
car | link |
plane | link |
Datasets
Synthetic Datasets
Toy datasets for both the car and chair categories are provided with the code to ease reproducibility. We recommend running with the toy datasets (as described below) to reproduce the results reported in the paper.
You will need at least 20 GB of space to download the full datasets.
The datasets must be downloaded to data/renders.
Name | Category | Drive Link |
---|---|---|
test_car_subset | car | link |
test_chair_subset | chair | link |
train_car_shapenet | car | link |
test_car_shapenet | car | link |
train_chair_shapenet | chair | link |
test_chair_shapenet | chair | link |
train_dolphin | dolphin | link |
test_dolphin | dolphin | link |
EVIMO Datasets
Please click here to download the EVIMO dataset we use.
We collect 5 event sequences of the car object and 3 event sequences of the plane object from EVIMO DAVIS346 recordings for E3D. For more details on the data collection, please refer to the EVIMO page. We have already generated the event frames from the raw EVIMO data. If you want to generate your own event frames, please download the raw data from EVIMO DAVIS346 and refer to pydvs.
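For intuition, the sketch below shows one generic way to accumulate a raw (x, y, t, polarity) event stream into a 2D event frame by counting signed events per pixel. It is only an illustration, not the pydvs preprocessing used for the EVIMO data; the helper name and the DAVIS346 resolution in the example are assumptions made for this sketch.

```python
# Minimal illustration of turning a raw (x, y, t, polarity) event stream into a
# 2D event frame by counting signed events per pixel. This is NOT the pydvs
# preprocessing used for EVIMO; it only shows the general idea.
import numpy as np

def events_to_frame(events: np.ndarray, height: int, width: int) -> np.ndarray:
    """events: N x 4 array with columns (x, y, t, polarity in {0, 1} or {-1, +1})."""
    frame = np.zeros((height, width), dtype=np.float32)
    x = events[:, 0].astype(np.int64)
    y = events[:, 1].astype(np.int64)
    p = np.where(events[:, 3] > 0, 1.0, -1.0)  # map polarity to +/-1
    np.add.at(frame, (y, x), p)                # accumulate signed counts per pixel
    return frame

# Example with random events on a DAVIS346-sized sensor (346 x 260 pixels).
rng = np.random.default_rng(0)
ev = np.stack([rng.integers(0, 346, 1000),   # x
               rng.integers(0, 260, 1000),   # y
               np.sort(rng.random(1000)),    # t
               rng.integers(0, 2, 1000)], 1) # polarity
print(events_to_frame(ev, height=260, width=346).shape)
```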
Running E3D
Experiment settings are stored in the JSON files under the config/ directory; you can change the settings in the JSON file for your experiment. The settings are also formulated in the Param class in ./utils/params.py, where you can change the default value of each item.
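As an illustration of how a JSON config can override class defaults, the sketch below uses a hypothetical, simplified `Param` stand-in; the field names and the loader are placeholders and do not reflect the actual schema of ./utils/params.py.

```python
# Hypothetical sketch of how a JSON experiment config can override class
# defaults. The real Param class in ./utils/params.py may differ; the fields
# below (name, gpu, batch_size) are placeholders, not the repository's schema.
import json
from dataclasses import dataclass, fields

@dataclass
class Param:
    name: str = "experiment"
    gpu: int = 0
    batch_size: int = 4

def load_params(cfg_path: str) -> Param:
    with open(cfg_path) as f:
        overrides = json.load(f)
    params = Param()
    for field in fields(Param):  # keep only keys the class knows about
        if field.name in overrides:
            setattr(params, field.name, overrides[field.name])
    return params

# Usage: params = load_params("./config/synth/config.json")
```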
Evaluation with Pre-trained Models
For synthetic datasets:
```bash
python predict.py --cfg ./config/synth/config.json --gpu 0 --segpose_model_cpt /YourModelPath --name /YourExperimentName
```
For EVIMO datasets:
```bash
python predict.py --cfg ./config/evimo/config.json --gpu 0 --segpose_model_cpt /YourModelPath --name /YourExperimentName
```
Training
For synthetic datasets:
```bash
python train-segpose.py --gpu 0 --config ./config/synth/config.json
```
For EVIMO datasets:
```bash
python train-segpose.py --gpu 0 --config ./config/evimo/config.json
```
Generating a synthetic event dataset
Default parameters are in synth_dataset/params.py.
```bash
cd synth_dataset
python generate_dataset.py --gpu 0 --name test_car --category car
```
The dataset will be generated in data/renders by default.
Contributing
Any contributions are much appreciated! The repository uses pre-commit to clean the code before committing. To install it, run:
```bash
conda install -c conda-forge pre-commit
pre-commit install
```
This sets up the pre-commit hooks. If this is your first time contributing to an open-source project, follow the guidelines here.