CoSA
Weakly Supervised Co-training with Swapping Assignments for Semantic Segmentation
Xinyu Yang, Hossein Rahmani, Sue Black, Bryan M. Williams
Overview
We propose an end-to-end framework for weakly supervised semantic segmentation (WSSS): Co-training with Swapping Assignments (CoSA).
<p align="middle"> <img src="./assets/overview.png" alt="CoSA pipeline" width="1200px"> </p>
Usage
1. Data Preparation
<details> <summary> COCO dataset </summary>
1. Download and Extract COCO 2014
mkdir coco
cd coco
wget http://images.cocodataset.org/zips/train2014.zip
wget http://images.cocodataset.org/zips/val2014.zip
wget http://images.cocodataset.org/zips/test2014.zip
unzip ./train2014.zip
unzip ./val2014.zip
unzip ./test2014.zip
2. Download Segmentation Labels
Download the COCO segmentation labels coco_anno.tar from the release page and move it to the coco directory, or use the following command to download it directly on the server:
wget https://github.com/youshyee/CoSA/releases/download/ann_coco/coco_anno.tar
After that you should extract it by running:
tar -xvf coco_anno.tar
You should then have a directory structure like this (the number in brackets is the number of images):
coco/
├── SegmentationClass
│ ├── train2014 (82081)
│ └── val2014 (40137)
├── test2014 (40775)
├── train2014 (82783)
└── val2014 (40504)
</details>
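To sanity-check the COCO preparation, you can compare the file counts against the numbers listed above; a minimal check, assuming the default directory layout shown above:

```bash
# count images and segmentation labels; totals should match the numbers above
ls coco/train2014 | wc -l                      # expect 82783
ls coco/val2014 | wc -l                        # expect 40504
ls coco/SegmentationClass/train2014 | wc -l    # expect 82081
ls coco/SegmentationClass/val2014 | wc -l      # expect 40137
```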
<details>
<summary>
VOC dataset
</summary>
1. Download and Extract PASCAL VOC 2012
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
tar -xvf VOCtrainval_11-May-2012.tar
2. Download the augmented annotations
Download the augmented annotations SegmentationClassAug.zip from the release page, or use the following command to download it directly on the server:
wget https://github.com/youshyee/CoSA/releases/download/ann_voc/SegmentationClassAug.zip
After downloading SegmentationClassAug.zip, unzip it and move it to VOCdevkit/VOC2012. The directory structure should be like this (the number in brackets is the number of images):
VOCdevkit/
└── VOC2012
├── Annotations
├── ImageSets
├── JPEGImages (17125)
├── SegmentationClass
├── SegmentationClassAug (12031)
└── SegmentationObject (2913)
</details>
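Likewise, a quick check that SegmentationClassAug ended up in the right place; the paths assume the layout shown above:

```bash
# the augmented annotation folder should sit next to JPEGImages and contain 12031 label maps
ls VOCdevkit/VOC2012 | grep SegmentationClassAug
ls VOCdevkit/VOC2012/SegmentationClassAug | wc -l   # expect 12031
```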
2. Set Up the Python Environment
We recommend using Anaconda to create a virtual environment.
conda create -yn cosa python=3.10 pip wheel
conda activate cosa
pip install -r requirements.txt
After that, install the extension packages mmcv, bilateralfilter, and pydensecrf by running:
mim install mmcv-lite
pip install git+https://github.com/lucasb-eyer/pydensecrf.git
cd utils/bilateralfilter
# if SWIG is not installed: sudo apt install swig
swig -python -c++ bilateralfilter.i
python setup.py install
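To confirm the extensions installed correctly, a quick import check like the following should run without errors (the module name `bilateralfilter` is an assumption based on the SWIG interface file; adjust if your build differs):

```bash
# verify the extension packages are importable from the cosa environment
python -c "import mmcv; print('mmcv', mmcv.__version__)"
python -c "import pydensecrf.densecrf; print('pydensecrf OK')"
python -c "import bilateralfilter; print('bilateralfilter OK')"  # module name assumed from bilateralfilter.i
```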
3. Train and Evaluate
### Train and evaluate on COCO; you may need to modify `coco_root` in `run_coco.sh` to point to your COCO dataset.
sh run_coco.sh
### Train and evaluate on VOC; you may need to modify `voc12_root` in `run_voc.sh` to point to your VOC dataset.
sh run_voc.sh
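If you would rather not edit the scripts by hand, something like the sketch below may work; it assumes `coco_root` and `voc12_root` appear as plain shell variable assignments in `run_coco.sh` / `run_voc.sh`, so check the scripts first and adjust the placeholder paths to your own dataset locations:

```bash
# point the run scripts at local dataset copies (paths are placeholders)
sed -i 's|^coco_root=.*|coco_root=/path/to/coco|' run_coco.sh
sed -i 's|^voc12_root=.*|voc12_root=/path/to/VOCdevkit/VOC2012|' run_voc.sh
```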
Tested Environment
- Ubuntu 20.04 LTS x86_64
- CUDA 12.1
- NVIDIA GeForce RTX 3090 x2
- Python 3.10
Our Results
Semantic segmentation performance on VOC and COCO. Training logs and pretrained weights are available below.
| Dataset | Backbone | Val (mIoU) | Test (mIoU) | Log | Weight |
|---|---|---|---|---|---|
| COCO | ViT-B | 51.0 | - | log | weight |
| VOC | ViT-B | 76.2 | 75.1 | log | weight |
Visualization results for CoSA compared with MCT, ToCo, and BECO on COCO:
<p align="middle"> <img src="./assets/coco1.png" alt="COCO Visual1" width="1200px"> </p> <p align="middle"> <img src="./assets/coco2.png" alt="COCO Visual2" width="1200px"> </p>
Visualization results for CoSA compared with MCT, ToCo, and BECO on VOC:
<p align="middle"> <img src="./assets/voc1.png" alt="VOC Visual1" width="1200px"> </p>
The code and weights for CoSA-MS are coming soon.
Citation
Please cite our work if you find it helpful:
@article{yang2024weakly,
title={Weakly supervised co-training with swapping assignments for semantic segmentation},
author={Yang, Xinyu and Rahmani, Hossein and Black, Sue and Williams, Bryan M},
journal={arXiv preprint arXiv:2402.17891},
year={2024}
}
Acknowledgement
This repo builds heavily upon ToCo and MCT. Please also consider citing their work if you find this repo helpful.