# MotionPriorCMax (ECCV 2024)

Official repository for **Motion-prior Contrast Maximization for Dense Continuous-Time Motion Estimation**, accepted at ECCV 2024, by Friedhelm Hamann, Ziyun Wang, Ioannis Asmanis, Kenneth Chaney, Guillermo Gallego and Kostas Daniilidis.
<h2 align="left">Paper | Video (5min) | Talk (20min, NeuroPAC) | Data</h2>
## Quickstart
Clone the repository and set up a conda environment:

```shell
git clone https://github.com/tub-rip/MotionPriorCMax
conda create --name motionpriorcm python=3.10
conda activate motionpriorcm
```
Install PyTorch with a command that matches your CUDA version; you can find the compatible commands on the official PyTorch website (tested with PyTorch 2.2.2), e.g.:

```shell
conda install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia
```
Install the other required packages:

```shell
pip install -r requirements.txt
```
The training script supports logging with wandb. If you want to use it, install it with:

```shell
pip install -U 'wandb>=0.12.10'
```

Otherwise, skip this step; basic metrics are logged with TensorBoard.
## Optical Flow

### Inference on DSEC
- Download the test data from the official source and copy or link it to a folder called `data`:

  ```shell
  cd <repository-root>
  mkdir -p data/dsec/test && cd data/dsec/test
  wget https://download.ifi.uzh.ch/rpg/DSEC/test_coarse/test_events.zip
  unzip test_events.zip
  rm test_events.zip
  ```
- Download the checkpoint `unet_dsec_ours-poly-k1_Tab4L7.pth` and place it in a `weights` folder. The resulting structure should look like this:

  ```
  MotionPriorCMax
  ├── weights
  │   └── unet_dsec_ours-poly-k1_Tab4L7.pth
  └── data
      └── dsec
          └── test
              ├── interlaken_00_a
              ├── interlaken_00_b
              └── ...
  ```
- Run the inference script:

  ```shell
  cd <repository-root>
  python scripts/dsec_inference.py --config config/exe/dsec_inference/config.yaml
  ```
- You'll find the predicted flow in `output/dsec_inference/YYYYMMDD_HHMMSS/flow`. To verify the results, upload the zipped `flow` folder to the DSEC benchmark page.
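Zipping the `flow` folder can be done with Python's standard library; a minimal sketch (the output paths are examples, and the benchmark's exact archive-layout requirements should be checked on its page):

```python
import shutil
from pathlib import Path

def zip_flow_dir(flow_dir: str, out_base: str = "flow_submission") -> str:
    """Zip a predicted-flow folder for upload; returns the path of the .zip."""
    flow = Path(flow_dir)
    # make_archive appends '.zip'; root_dir/base_dir keep archive paths
    # relative, so the zip contains 'flow/...' rather than absolute paths
    return shutil.make_archive(out_base, "zip",
                               root_dir=flow.parent,
                               base_dir=flow.name)
```

Usage: `zip_flow_dir("output/dsec_inference/20240101_120000/flow")` produces `flow_submission.zip` in the current directory.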
### Training on DSEC
- For training, additionally download the training data:

  ```shell
  cd <repository-root>
  mkdir -p data/dsec/train && cd data/dsec/train
  wget https://download.ifi.uzh.ch/rpg/DSEC/train_coarse/train_events.zip
  unzip train_events.zip
  rm train_events.zip
  ```
- Run training on the DSEC dataset:

  ```shell
  python scripts/flow_training.py --gpus 0 1 --config config/exe/flow_training/dsec.yaml
  ```
We trained on two A6000 GPUs with a batch size of 14. For optimal results, you may need to adapt the learning rate when training with other setups.
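When changing the effective batch size, the linear scaling rule is a common starting point for adjusting the learning rate. This is a general heuristic, not a setting prescribed by the authors, and the base learning rate below is purely illustrative:

```python
def scale_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear scaling rule: keep lr / batch_size roughly constant (heuristic)."""
    return base_lr * (new_batch / base_batch)

# e.g. halving the reference batch size of 14 halves the learning rate
print(scale_lr(1e-4, 14, 7))  # → 5e-05
```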
To run your own model on the DSEC test set, select the corresponding checkpoint, extract the model weights with `scripts/extract_weights_from_checkpoint.py`, update the model path in the config, and run the inference script as described above.
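Conceptually, the extraction step pulls the model's `state_dict` out of the training checkpoint and drops the wrapper prefix that frameworks like PyTorch Lightning add to the keys. A hypothetical sketch of that key-renaming logic (the actual `scripts/extract_weights_from_checkpoint.py` may differ, and the `model.` prefix is an assumption):

```python
def strip_prefix(state_dict: dict, prefix: str = "model.") -> dict:
    """Remove a wrapper prefix from state_dict keys; leave other keys untouched."""
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}

# With torch this would be used roughly as:
#   ckpt = torch.load("checkpoint.ckpt", map_location="cpu")
#   torch.save(strip_prefix(ckpt["state_dict"]), "weights/model.pth")
```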
## Trajectory Prediction

### Inference on EVIMO2
- Download EVIMO2 from the official source; detailed steps can be found on the web page. You can use their script to download the whole dataset, but for inference you only need the 8 validation sequences of the motion segmentation task, which you can also download manually.
- Download the EVIMO2 continuous flow ground-truth data and copy it into the same folder structure as the event data.
- Download the checkpoint file(s). You can find all checkpoints here. Checkpoints are named `<model>_<dataset>_<exp-name>_<paper-ref>`; e.g., the model trained with our motion-prior CMax loss on EVIMO2 is called `raft-spline_evimo2-300ms_ours-selfsup_Tab2L5.ckpt`. The resulting structure should look like this:
  ```
  MotionPriorCMax
  ├── weights
  │   └── raft-spline_evimo2-300ms_ours-selfsup_Tab2L5.ckpt
  └── data
      └── evimo2
          └── samsung_mono
              └── imo
                  └── eval
                      ├── scene13_dyn_test_00_000000
                      │   ├── dataset_events_p.npy
                      │   ├── dataset_events_t.npy
                      │   ├── dataset_events_xy.npy
                      │   └── dataset_multiflow_10steps_vis.h5
                      └── ...
  ```
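Given the per-sequence layout above, the raw event arrays can be inspected with NumPy; a minimal sketch (the array shapes and the per-event pairing of the three files are assumptions based on the file names):

```python
import numpy as np
from pathlib import Path

def load_events(seq_dir: str):
    """Load polarity, timestamp and pixel-coordinate arrays for one sequence."""
    seq = Path(seq_dir)
    p = np.load(seq / "dataset_events_p.npy")    # polarity per event
    t = np.load(seq / "dataset_events_t.npy")    # timestamp per event
    xy = np.load(seq / "dataset_events_xy.npy")  # pixel coordinates, assumed (N, 2)
    assert len(p) == len(t) == len(xy), "arrays should be aligned per event"
    return p, t, xy
```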
- Now you can run inference for any of the models like this:

  ```shell
  python scripts/trajectory_inference.py \
      model=raft-spline \
      dataset=evimo2_300ms \
      +experiment=raft-spline_evimo2-300ms_ours-selfsup_Tab2L5 \
      hardware.gpus=0
  ```
Switch the experiment config according to the chosen checkpoint.
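The `<model>_<dataset>_<exp-name>_<paper-ref>` naming convention can be split mechanically when picking the matching experiment config; a small sketch, assuming none of the four fields itself contains an underscore:

```python
def parse_ckpt_name(filename: str) -> dict:
    """Split a checkpoint filename into its four naming fields."""
    stem = filename.rsplit(".", 1)[0]  # drop the .ckpt / .pth extension
    model, dataset, exp_name, paper_ref = stem.split("_")
    return {"model": model, "dataset": dataset,
            "exp_name": exp_name, "paper_ref": paper_ref}

print(parse_ckpt_name("raft-spline_evimo2-300ms_ours-selfsup_Tab2L5.ckpt"))
# → {'model': 'raft-spline', 'dataset': 'evimo2-300ms', 'exp_name': 'ours-selfsup', 'paper_ref': 'Tab2L5'}
```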
### Inference on MultiFlow

(Details to be added)
## Citation

If you use this work in your research, please consider citing:
```bibtex
@InProceedings{Hamann24eccv,
  author       = {Friedhelm Hamann and Ziyun Wang and Ioannis Asmanis and Kenneth Chaney and Guillermo Gallego and Kostas Daniilidis},
  title        = {Motion-prior Contrast Maximization for Dense Continuous-Time Motion Estimation},
  booktitle    = {European Conference on Computer Vision (ECCV)},
  pages        = {18--37},
  doi          = {10.1007/978-3-031-72646-0_2},
  year         = {2024},
  organization = {Springer}
}
```
## Acknowledgements

Many of the low-level functions for contrast maximization are inspired by the implementation of Secrets of Event-based Optical Flow, and the implementation of the BFlow network was influenced by BFlow. We thank the authors for their excellent work.