PyTorch implementation of the CVPR-2021 paper: Positive Sample Propagation along the Audio-Visual Event Line

paper link

Audio-Visual Event (AVE) Localization task

AVE localization aims to find the video segments that contain an audio-visual event and to classify the event category. An audio-visual event is one that is both audible and visible: the sound source must appear in the visual frames (visible) and the sound it makes must be present in the audio track (audible).

(Figure: AVE localization)
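For concreteness, here is a minimal sketch of how segment-level predictions map to localized events. It assumes the AVE setup of 10 one-second segments per video and 28 event categories plus a background class; the variable names and the background index are illustrative, not the repository's actual code.

<pre><code>import numpy as np

# Hypothetical class scores for one 10-second video:
# one row per 1-second segment, one column per class
# (assumed: 28 event categories + 1 background class).
num_segments, num_classes = 10, 29
scores = np.random.rand(num_segments, num_classes)

# A segment contains an audio-visual event when its predicted
# class is not background; the predicted class gives the category.
pred = scores.argmax(axis=1)
background_idx = num_classes - 1  # assumed position of the background class
event_segments = [(t, int(c)) for t, c in enumerate(pred) if c != background_idx]
print(event_segments)  # [(second, category id), ...]
</code></pre>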

Our Framework

(Figure: framework overview)

Data preparation

The AVE dataset and the extracted audio and visual features can be downloaded from https://github.com/YapengTian/AVE-ECCV18. The other preprocessed files used in this repository can be downloaded from here. All required files are listed below and should be placed in the data folder; a quick way to inspect them is sketched after the list.

<pre><code>audio_feature.h5
visual_feature.h5
audio_feature_noisy.h5
visual_feature_noisy.h5
right_label.h5
prob_label.h5
labels_noisy.h5
mil_labels.h5
train_order.h5
val_order.h5
test_order.h5
</code></pre>
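If you want to sanity-check the downloaded files, the short h5py sketch below simply lists the datasets stored in each file; the exact keys and array shapes are not documented here, so nothing is assumed about them.

<pre><code>import h5py

# Print the datasets contained in each downloaded file under ./data.
files = [
    "audio_feature.h5", "visual_feature.h5", "right_label.h5",
    "train_order.h5", "val_order.h5", "test_order.h5",
]
for name in files:
    with h5py.File(f"./data/{name}", "r") as f:
        for key in f.keys():
            print(name, key, f[key].shape, f[key].dtype)
</code></pre>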

Fully supervised setting

Training:

<pre><code>CUDA_VISIBLE_DEVICES=0 python fully_supervised_main.py --model_name PSP --threshold=0.099 --train
</code></pre>

Testing:

<pre><code>CUDA_VISIBLE_DEVICES=0 python fully_supervised_main.py --model_name PSP --threshold=0.099 --trained_model_path ./model/PSP_fully.pt
</code></pre>

Weakly supervised setting

Training:

<pre><code>CUDA_VISIBLE_DEVICES=0 python weakly_supervised_main.py --model_name PSP --threshold=0.095 --train
</code></pre>

Testing:

<pre><code>CUDA_VISIBLE_DEVICES=0 python weakly_supervised_main.py --model_name PSP --threshold=0.095 --trained_model_path ./model/PSP_weakly.pt
</code></pre>

Note: The pre-trained models can be downloaded here; they should be placed in the model folder. If you train from scratch in either setting, you may make some adjustments to further improve performance (e.g., try another threshold value, choose a different initialization method, and so on); one example of the latter is sketched below.
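As an example of the last suggestion, the sketch below applies Xavier initialization to the linear layers of a network before training from scratch. `model` is a placeholder for whichever network you build; this is an assumption for illustration, not necessarily the initialization used for the released checkpoints.

<pre><code>import torch.nn as nn

def xavier_init(module):
    # Re-initialize linear layers with Xavier-uniform weights and zero biases.
    if isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# model = ...               # build the model as usual before training
# model.apply(xavier_init)  # try a different initialization when training from scratch
</code></pre>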

Citation

If our paper is useful for your research, please consider citing it:

<pre><code>@InProceedings{zhou2021positive,
  title     = {Positive Sample Propagation along the Audio-Visual Event Line},
  author    = {Zhou, Jinxing and Zheng, Liang and Zhong, Yiran and Hao, Shijie and Wang, Meng},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021},
}
</code></pre>

Acknowledgements

This code is built upon YapengTian/AVE-ECCV18; thanks for their great work. We hope our source code helps people who are interested in our work or in audio-visual related problems. If you have any questions about the paper or the code, please feel free to open an issue or contact us by email.