💬 Stepping-Stones

This is the official PyTorch implementation of "Stepping Stones: A Progressive Training Strategy for Audio-Visual Semantic Segmentation". Please refer to our ECCV 2024 paper for more details.

Paper Title: "Stepping Stones: A Progressive Training Strategy for Audio-Visual Semantic Segmentation"

Authors: Juncheng Ma, Peiwen Sun, Yaoting Wang and Di Hu

Accepted by: European Conference on Computer Vision (ECCV 2024)

🚀 Project page: Project Page

📄 Paper: Paper

🔍 Supplementary material: Supplementary

Overview

Audio-Visual Segmentation (AVS) aims to achieve pixel-level localization of sound sources in videos, while Audio-Visual Semantic Segmentation (AVSS), as an extension of AVS, further pursues semantic understanding of audio-visual scenes. However, since the AVSS task requires establishing audio-visual correspondence and semantic understanding simultaneously, we observe that previous methods struggle to handle this mixture of objectives in end-to-end training, resulting in insufficient learning and sub-optimal performance. We therefore propose a two-stage training strategy called Stepping Stones, which decomposes the AVSS task into two simpler subtasks, from localization to semantic understanding, each fully optimized in its own stage so as to achieve step-by-step global optimization. This training strategy has also proven general and effective when applied to existing methods. To further improve performance on AVS tasks, we propose a novel framework, Adaptive Audio Visual Segmentation (AAVS), in which we incorporate an adaptive audio query generator and integrate masked attention into the transformer decoder, facilitating the adaptive fusion of visual and audio features. Extensive experiments demonstrate that our methods achieve state-of-the-art results on all three AVS benchmarks.
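Since the overview mentions integrating masked attention into the transformer decoder, here is a small self-contained sketch of what Mask2Former-style masked cross-attention looks like in PyTorch. It is illustrative only: the helper name, tensor shapes, and the random toy mask are assumptions, not the repository's actual decoder or audio query generator.

```python
import torch

def masked_cross_attention(queries, features, attn_mask):
    """Sketch of Mask2Former-style masked cross-attention.

    queries:   (B, Q, C) object queries (audio-conditioned in AAVS)
    features:  (B, K, C) flattened visual features
    attn_mask: (B, Q, K) boolean, True where attention is blocked
    """
    # If a query's mask blocks every location, unblock that row to avoid
    # NaNs from softmax over all -inf (a common safeguard in such decoders).
    attn_mask = attn_mask & ~attn_mask.all(dim=-1, keepdim=True)
    scale = queries.shape[-1] ** -0.5
    logits = torch.einsum("bqc,bkc->bqk", queries, features) * scale
    logits = logits.masked_fill(attn_mask, float("-inf"))
    return torch.einsum("bqk,bkc->bqc", logits.softmax(dim=-1), features)

# Toy usage: the random mask stands in for a predicted foreground region
# that restricts where each query may attend.
B, Q, K, C = 2, 5, 16, 32
out = masked_cross_attention(
    torch.randn(B, Q, C), torch.randn(B, K, C), torch.rand(B, Q, K) > 0.5
)
print(out.shape)  # torch.Size([2, 5, 32])
```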

<img width="1009" alt="image" src="image/teaser.png">

Results

Quantitative comparison

| Method | S4 mIoU | S4 F-score | MS3 mIoU | MS3 F-score | AVSS mIoU | AVSS F-score | Reference |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AVSBench | 78.7 | 87.9 | 54.0 | 64.5 | 29.8 | 35.2 | ECCV'2022 |
| AVSC | 80.6 | 88.2 | 58.2 | 65.1 | - | - | ACM MM'2023 |
| CATR | 81.4 | 89.6 | 59.0 | 70.0 | 32.8 | 38.5 | ACM MM'2023 |
| DiffusionAVS | 81.4 | 90.2 | 58.2 | 70.9 | - | - | ArXiv'2023 |
| ECMVAE | 81.7 | 90.1 | 57.8 | 70.8 | - | - | CVPR'2023 |
| AuTR | 80.4 | 89.1 | 56.2 | 67.2 | - | - | ArXiv'2023 |
| SAMA-AVS | 81.5 | 88.6 | 63.1 | 69.1 | - | - | WACV'2023 |
| AQFormer | 81.6 | 89.4 | 61.1 | 72.1 | - | - | IJCAI'2023 |
| AVSegFormer | 82.1 | 89.9 | 58.4 | 69.3 | 36.7 | 42.0 | AAAI'2024 |
| AVSBG | 81.7 | 90.4 | 55.1 | 66.8 | - | - | AAAI'2024 |
| GAVS | 80.1 | 90.2 | 63.7 | 77.4 | - | - | AAAI'2024 |
| MUTR | 81.5 | 89.8 | 65.0 | 73.0 | - | - | AAAI'2024 |
| AAVS (Ours) | 83.2 | 91.3 | 67.3 | 77.6 | 48.5* | 53.2* | ECCV'2024 |

$^*$ indicates that the model uses the Stepping Stones strategy.
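For reference, the mIoU and F-score reported above are typically computed per frame on AVSBench-style benchmarks roughly as sketched below; this README does not define the metrics, so the exact protocol (and in particular the β² = 0.3 weighting, a common convention in the AVS literature) is an assumption here.

```python
import torch

def binary_iou(pred: torch.Tensor, gt: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Jaccard index between binary masks of shape (N, H, W)."""
    pred, gt = pred.bool(), gt.bool()
    inter = (pred & gt).flatten(1).sum(dim=1).float()
    union = (pred | gt).flatten(1).sum(dim=1).float()
    return inter / (union + eps)

def f_score(pred: torch.Tensor, gt: torch.Tensor, beta2: float = 0.3, eps: float = 1e-7) -> torch.Tensor:
    """F-measure with beta^2 = 0.3 (assumed AVS convention)."""
    pred, gt = pred.bool(), gt.bool()
    tp = (pred & gt).flatten(1).sum(dim=1).float()
    precision = tp / (pred.flatten(1).sum(dim=1).float() + eps)
    recall = tp / (gt.flatten(1).sum(dim=1).float() + eps)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + eps)

# Toy example with random binary masks
pred = torch.randint(0, 2, (2, 4, 4))
gt = torch.randint(0, 2, (2, 4, 4))
print(binary_iou(pred, gt), f_score(pred, gt))
```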

Qualitative comparison

Single Sound Source Segmentation (S4): <img width="1009" alt="image" src="image/s4.png">

Multiple Sound Source Segmentation (MS3): <img width="1009" alt="image" src="image/ms3.png">

Audio-Visual Semantic Segmentation (AVSS): <img width="1009" alt="image" src="image/v2.png">

Code instructions

Data Preparation

Please refer to the link AVSBenchmark to download the datasets. You can place the data under the data folder or use a folder of your own; remember to modify the path in the config files accordingly. The data directory is organized as below (a small sanity check is sketched after the layout):

|--data
   |--v2
   |--v1m
   |--v1s
   |--metadata.csv
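If it helps, here is a small optional sanity check for this layout; the data root below is just the default suggested above and should match whatever path you set in the config files.

```python
from pathlib import Path

# Adjust this to the dataset root configured in your config files.
data_root = Path("data")

expected = ["v2", "v1m", "v1s", "metadata.csv"]
missing = [name for name in expected if not (data_root / name).exists()]
print("Missing entries:", missing or "none — AVSBench layout looks complete")
```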

Pre-trained backbone

We use a Mask2Former model with a Swin-B backbone pre-trained on ADE20K, which can be downloaded from this link. Don't forget to modify the path in config.py.

In addition, we changed some metadata of the backbone, so you should replace the config.json and preprocessor_config.json in the ".models" folder with the ones we provide (for the AVS and AVSS subtasks respectively).
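As a quick way to confirm that the downloaded backbone and the replaced metadata files load correctly, a sketch using the Hugging Face transformers API is shown below; the local directory name is a placeholder, and the repository's actual loading code may differ from this.

```python
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# Placeholder path: point this at the folder holding the downloaded Swin-B
# Mask2Former weights plus the provided config.json / preprocessor_config.json.
backbone_dir = "./models/mask2former-swin-base-ade"

processor = AutoImageProcessor.from_pretrained(backbone_dir)
model = Mask2FormerForUniversalSegmentation.from_pretrained(backbone_dir)
print(model.config.model_type, sum(p.numel() for p in model.parameters()))
```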

Download checkpoints

We provide checkpoints for all three subtasks. You can download them from the following links for quick evaluation.

| Subset | mIoU | F-score | Download |
| --- | --- | --- | --- |
| S4 | 83.18 | 91.33 | ckpt |
| MS3 | 67.30 | 77.63 | ckpt |
| AVSS | 48.50 | 53.20 | ckpt |
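To verify a downloaded checkpoint before running the test scripts, something along these lines should work; the file name and the exact state-dict layout are assumptions, since they depend on how the checkpoints were saved.

```python
import torch

# Placeholder file name: use the checkpoint downloaded for your subtask.
ckpt_path = "S4_best.pth"

state = torch.load(ckpt_path, map_location="cpu")
# Checkpoints are commonly either a raw state_dict or a dict wrapping one.
state_dict = state.get("state_dict", state) if isinstance(state, dict) else state
print(f"{len(state_dict)} tensors, e.g. {list(state_dict)[:3]}")
```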

Testing

First, modify the paths in config.py.

For the S4 and MS3 subtasks, run the following commands to test:

cd avs
sh test.sh

For the AVSS subtask, you should first place the predicted masks without semantics from the trained AVSS model in the following format and modify the "mask_path" in config.py. Alternatively, you can download the results we used from this link. A small layout check is sketched after the tree below.

|--masks
   |--v2
      |--_aldtLqTVYI_1000_11000
         |--0.png
         |--...
   |--v1m
   |--v1s
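Below is a small optional check that the predicted masks are organized as expected; the masks root and the per-clip folders are taken from the layout above, and "mask_path" in config.py should point at the same root.

```python
from pathlib import Path

# Root of the class-agnostic masks, matching mask_path in config.py.
mask_root = Path("masks")

for subset in ("v2", "v1m", "v1s"):
    clips = [d for d in (mask_root / subset).iterdir() if d.is_dir()]
    n_frames = sum(len(list(d.glob("*.png"))) for d in clips)
    print(f"{subset}: {len(clips)} clips, {n_frames} mask frames")
```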

Then, you can run the following code to test.

cd avss
sh test.sh

Training

For S4 and MS3 subtasks, you can run the following code to train:

Remember to modify the config.

cd avs
sh train_avs.sh

For the Stepping Stones strategy on the AVSS subtask, you can run the following code to train:

Remember to organize the predicted masks without semantics as described in the Testing section and to modify the config.

cd avss
sh train_avss.sh

Citation

If you find this work useful, please consider citing it.

@inproceedings{ma2024steppingstones,
  title={Stepping Stones: A Progressive Training Strategy for Audio-Visual Semantic Segmentation},
  author={Ma, Juncheng and Sun, Peiwen and Wang, Yaoting and Hu, Di},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024}
}