Dual Transfer Learning for Event-based End-task Prediction via Pluggable Event to Image Translation (ICCV'21)

We have updated our paper by correcting some typos and adding more references.

Please refer to the arXiv paper at https://arxiv.org/pdf/2109.01801.pdf for the latest version.

Citation

If you find this resource helpful, please cite the paper as follows:

@inproceedings{wang2021dual,
  title={Dual transfer learning for event-based end-task prediction via pluggable event to image translation},
  author={Wang, Lin and Chae, Yujeong and Yoon, Kuk-Jin},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={2135--2145},
  year={2021}
}

Setup

Download

git clone https://github.com/addisonwang2013/DTL/

Make your own environment

conda create -n myenv python=3.7
conda activate myenv

Install the requirements

cd evdistill

pip install -r requirements.txt

Download example validation data (general and LDR visual conditions) from this link: DDD17 example data

Download the pretrained models from this link: checkpoints

Modify configurations.py in the configs folder with the relevant paths to the test data and checkpoints
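The exact variable names in configurations.py are repo-specific; a hypothetical sketch of the kind of path settings to fill in (names are illustrative, not taken from the repo):

```python
# configs/configurations.py -- hypothetical layout; match names to the actual file
DATA_ROOT = '/path/to/ddd17_example_data'   # downloaded DDD17 example data
CKPT_DIR = '/path/to/checkpoints'           # downloaded pretrained models
```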

Visualizing semantic segmentation results for general and LDR visual conditions:

python visualize.py

Note

In this work, for convenience, the event data are embedded and stored as multi-channel event images, which are then paired with the APS frames. It is also possible to feed the embedded raw event data directly to the student network together with the APS frames.
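A generic sketch of this kind of embedding, assuming events are (x, y, t, polarity) tuples accumulated into one channel per polarity (the actual encoding in the DTL code base may differ):

```python
import numpy as np

def events_to_image(events, height, width):
    """Accumulate raw events (x, y, t, polarity) into a 2-channel event image:
    channel 0 counts positive-polarity events, channel 1 negative-polarity ones.
    Illustrative sketch only, not the exact DTL embedding."""
    img = np.zeros((2, height, width), dtype=np.float32)
    for x, y, t, p in events:
        channel = 0 if p > 0 else 1
        img[channel, int(y), int(x)] += 1.0
    return img

# Example: two positive events at (0, 0), one negative event at (1, 1)
events = [(0, 0, 0.00, 1), (0, 0, 0.01, 1), (1, 1, 0.02, -1)]
event_image = events_to_image(events, height=2, width=2)
```

The resulting tensor can be stacked with the APS frame along the channel dimension before being fed to the network.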

Acknowledgement

The skeleton code is inspired by DeepLab-v3-Plus and EDSR (https://github.com/sanghyun-son/EDSR-PyTorch).

License

MIT