# CoADNet-CoSOD
CoADNet: Collaborative Aggregation-and-Distribution Networks for Co-Salient Object Detection (NeurIPS 2020)
## Datasets
We employ COCO-SEG as our training dataset, which covers 78 different object categories and contains 200k labeled images in total. We also use an auxiliary dataset, DUTS (its training split), a popular benchmark for single-image salient object detection.
We employ four datasets for performance evaluation, as listed below:
- Cosal2015: 50 categories, 2015 images.
- iCoseg: 38 categories, 643 images.
- MSRC: 7 categories, 210 images.
- CoSOD3k: 160 categories, 3316 images.
Put all the above datasets, as well as the corresponding info files, under the `../data` folder.
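As a quick sanity check, here is a minimal sketch (not part of the repository) that verifies the expected layout; the folder names below are assumptions and should be matched to your local copies:

```python
# Check that the expected dataset folders exist under ../data.
# The folder names are assumptions -- rename to match your local copies.
import os

DATA_ROOT = "../data"
EXPECTED = ["COCO-SEG", "DUTS", "Cosal2015", "iCoseg", "MSRC", "CoSOD3k"]

for name in EXPECTED:
    path = os.path.join(DATA_ROOT, name)
    status = "ok" if os.path.isdir(path) else "MISSING"
    print(f"{path}: {status}")
```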
## Training
- Download the backbone networks and put them under `./ckpt/pretrained`.
- Run `Pretrain.py` to pretrain the whole network, which helps it learn saliency cues and speeds up convergence.
- Run `Train-COCO-SEG-S1.py` to train the whole network on the COCO-SEG dataset. Note that, since COCO-SEG is modified from a generic semantic segmentation dataset (MS-COCO) and may thus ignore crucial saliency patterns, a post-refinement procedure is needed, as conducted in `Train-COCO-SEG-S2.py`. When using other, more appropriate training datasets such as CoSOD3k, this procedure can be skipped. A sketch of chaining the three steps follows this list.
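For convenience, the three steps above can be chained with a short driver script. This is only a sketch using the script names listed above; it assumes each script reads its own configuration and takes no extra command-line arguments:

```python
# Run the three training steps in order, aborting if any step fails.
# Assumes each script is self-configured (no CLI flags are invented here).
import subprocess
import sys

STEPS = [
    "Pretrain.py",            # saliency pretraining
    "Train-COCO-SEG-S1.py",   # main training on COCO-SEG
    "Train-COCO-SEG-S2.py",   # post-refinement (skip for e.g. CoSOD3k)
]

for script in STEPS:
    print(f"==> running {script}")
    subprocess.run([sys.executable, script], check=True)
```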
## Testing
We organize the testing code in a Jupyter notebook, `test.ipynb`, which performs testing on all four evaluation datasets.
Note that there is an `is_shuffle` option during testing, which lets us run multiple trials and output more robust predictions; the idea is sketched below.
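To illustrate what multi-trial testing buys, here is a hedged sketch of averaging predictions over several shuffled orderings. `predict_group` is a hypothetical stand-in for the per-group inference in `test.ipynb`, assumed to return one saliency map (H x W, in [0, 1]) per input image:

```python
# Illustrative only: average per-image saliency maps over several trials,
# each with a different image ordering (what the is_shuffle option toggles).
# `predict_group` is a hypothetical stand-in for the model's inference.
import random
import numpy as np

def robust_predict(image_paths, predict_group, num_trials=5, seed=0):
    rng = random.Random(seed)
    acc = {p: 0.0 for p in image_paths}  # running sums per image
    for _ in range(num_trials):
        order = list(image_paths)
        rng.shuffle(order)                # randomize the group ordering
        maps = predict_group(order)       # one map per image, shuffled order
        for path, smap in zip(order, maps):
            acc[path] = acc[path] + np.asarray(smap, dtype=np.float32)
    # Averaging over trials smooths out order-dependent variance.
    return {p: s / num_trials for p, s in acc.items()}
```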