Localizing Visual Sounds the Easy Way

Official codebase for EZ-VSL, a simple yet effective approach to visual sound localization. Please check out the paper for full details.

Localizing Visual Sounds the Easy Way<br> Shentong Mo, Pedro Morgado<br> arXiv 2022.

<div align="center"> <img width="100%" alt="EZ-VSL Illustration" src="images/framework.png"> </div>

Environment

To set up the environment, simply run

pip install -r requirements.txt

Datasets

Flickr-SoundNet

Data can be downloaded from Learning to localize sound sources

VGG-Sound Source

Data can be downloaded from Localizing Visual Sounds the Hard Way

VGG-SS Unheard & Heard Test Data

Data can be downloaded from Unheard and Heard

Model Zoo

We release several models pre-trained with EZ-VSL in the hope that other researchers can also benefit from them.

| Method | Train Set | Test Set | CIoU | AUC | url | Train | Test |
|:------:|:---------:|:--------:|:----:|:---:|:---:|:-----:|:----:|
| EZ-VSL | Flickr 10k | Flickr SoundNet | 81.93 | 62.58 | model | script | script |
| EZ-VSL | Flickr 144k | Flickr SoundNet | 83.13 | 63.06 | model | script | script |
| EZ-VSL | VGG-Sound 144k | Flickr SoundNet | 83.94 | 63.60 | model | script | script |
| EZ-VSL | VGG-Sound 10k | VGG-SS | 37.18 | 38.75 | model | script | script |
| EZ-VSL | VGG-Sound 144k | VGG-SS | 38.85 | 39.54 | model | script | script |
| EZ-VSL | VGG-Sound Full | VGG-SS | 39.34 | 39.78 | model | script | script |
| EZ-VSL | Heard 110 | Heard 110 | 37.25 | 38.97 | model | script | script |
| EZ-VSL | Heard 110 | Unheard 110 | 39.57 | 39.60 | model | script | script |
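
The snippet below is a minimal sketch of how a released checkpoint could be loaded for inference. It assumes the checkpoints are standard PyTorch files and that the model class is exposed as EZVSL in model.py; the class name, checkpoint path, and checkpoint key are all assumptions here, so check the repository code before relying on them.

import torch
from model import EZVSL  # assumed module and class name

model = EZVSL()
ckpt = torch.load('checkpoints/flickr_10k/best.pth', map_location='cpu')  # example path
state = ckpt.get('model', ckpt)  # some checkpoints nest weights under a 'model' key
model.load_state_dict(state)
model.eval()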

Train & Test

For training an EZ-VSL model, please run

python train.py --multiprocessing_distributed \
    --train_data_path /path/to/Flickr-all/ \
    --test_data_path /path/to/Flickr-SoundNet/ \
    --test_gt_path /path/to/Flickr-SoundNet/Annotations/ \
    --experiment_name flickr_10k \
    --trainset 'flickr_10k' \
    --testset 'flickr' \
    --epochs 100 \
    --batch_size 128 \
    --init_lr 0.0001

For testing and visualization, simply run

python test.py --test_data_path /path/to/Flickr-SoundNet/ \
    --test_gt_path /path/to/Flickr-SoundNet/Annotations/ \
    --model_dir checkpoints \
    --experiment_name flickr_10k \
    --save_visualizations \
    --testset 'flickr' \
    --alpha 0.4

The training script supports the following training sets: flickr, flickr_10k, flickr_144k, vggss, vggss_10k, vggss_144k, and vggss_heard.

For evaluation, it supports the following test sets: flickr, vggss, vggss_heard, and vggss_unheard.
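
The --alpha flag in the test command controls how the audio-visual localization (AVL) map and the object-guided localization (OGL) prior are blended into the final prediction. Below is a minimal sketch of this weighted combination, assuming both maps are tensors of the same spatial size scaled to [0, 1]; the function name is illustrative and not taken from the repository.

import torch

def combine_maps(avl_map, ogl_map, alpha=0.4):
    # alpha weights the audio-visual map, (1 - alpha) weights the
    # object-guided prior; both are assumed to be [H, W] tensors in [0, 1].
    return alpha * avl_map + (1 - alpha) * ogl_map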

Visualizations

The test.py script saves the predicted localization maps for all test images when the flag --save_visualizations is provided. All visualizations for the OGL, AVL and EZ-VSL localization maps are saved under {model_dir}/{experiment_name}/viz/. Here are a few examples.

<div align="center"> <img width="100%" alt="Visualizations" src="images/visualization.png"> </div>

Citation

If you find this repository useful, please cite our paper:

@article{mo2022EZVSL,
  title={Localizing Visual Sounds the Easy Way},
  author={Mo, Shentong and Morgado, Pedro},
  journal={arXiv preprint arXiv:2203.09324},
  year={2022}
}