Code for the AAAI 2024 paper "Finding Visual Saliency in Continuous Spike Stream"

This repository contains the official code for the AAAI 2024 paper "Finding Visual Saliency in Continuous Spike Stream".

Requirements

To install the requirements, run:

conda create -n svs python=3.7
conda activate svs
pip install -r requirements.txt

Data Organization

SVS Dataset

Download the SVS[w2ba] dataset, then organize the data in the following format:

root_dir
    SpikeData
        |----00001
        |     |-----spike_label_format
        |     |-----spike_numpy
        |     |-----spike_repr
        |     |-----label
        |----00002
        |     |-----spike_label_format
        |     |-----spike_numpy
        |     |-----spike_repr
        |     |-----label
        |----...

Here, label contains the saliency labels, spike_numpy contains the compressed spike data, spike_repr contains the interval spike representations, and spike_label_format contains the instance labels.
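
For reference, the compressed spikes under spike_numpy can be loaded with NumPy. The sketch below is a minimal example only; the file name, archive key, and bit-packing scheme are assumptions for illustration, not the repository's actual format:

import numpy as np

def load_spike_sequence(npz_path):
    # Load a compressed spike archive and unpack it into a binary stream.
    data = np.load(npz_path)                 # e.g. a file under spike_numpy
    packed = data["spike"]                   # hypothetical key for packed uint8 bits
    spikes = np.unpackbits(packed, axis=-1)  # restore 0/1 spikes along the last axis
    return spikes.astype(np.float32)         # roughly (T, H, W)

seq = load_spike_sequence("root_dir/SpikeData/00001/spike_numpy/0000.npz")
print(seq.shape)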

Training

Training on SVS dataset

To train the model on the SVS dataset, first set the dataset root $cfg.DATA.ROOT in config.py. The --step flag enables multi-step training, and --clip enables the multi-step loss. Then run the following command:

python train.py --gpu ${GPU-IDS} --exp_name ${experiment} --step --clip
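
For reference, setting the dataset root amounts to a one-line change in config.py; the value below is a placeholder for your local path:

# config.py (excerpt): point DATA.ROOT at your local copy of the dataset
cfg.DATA.ROOT = "/path/to/root_dir/SpikeData"  # placeholder path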

Testing

Download the multi-step model pretrained on the SVS dataset, multi_step[vn2x], then run:

python inference.py --checkpoint ./multi_step.pth --results ./results/SVS --step

Download the single-step model pretrained on the SVS dataset, single_step[scc0], then run:

python inference.py --checkpoint ./single_step.pth --results ./results/SVS

The results will be saved as indexed PNG files under the directory given by --results (e.g. ./results/SVS).
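
To sanity-check an output, you can open a predicted map with Pillow; the file path below is illustrative:

from PIL import Image
import numpy as np

pred = Image.open("./results/SVS/00001/000000.png")  # illustrative path
mask = np.array(pred)  # palette indices; nonzero pixels mark salient regions
print(mask.shape, mask.max())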

Additionally, you can adjust other configuration parameters in config.py.

Acknowledgement

This codebase is built upon the official DCFNet and Spikformer repositories. We adapt the code from eval-co-sod to evaluate the results.
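
For reference, eval-co-sod reports standard saliency metrics; below is a minimal sketch of the MAE metric, where the input shapes and [0, 1] normalization are assumptions:

import numpy as np

def mae(pred, gt):
    # Mean absolute error between a saliency map and its ground truth,
    # both assumed to be arrays in [0, 1] with the same shape.
    return np.abs(pred.astype(np.float32) - gt.astype(np.float32)).mean()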