Single-Stage Semantic Segmentation from Image Labels

This repository contains the original implementation of our paper:

Single-stage Semantic Segmentation from Image Labels<br> Nikita Araslanov and Stefan Roth<br> CVPR 2020. [pdf] [supp] [arXiv]

Contact: Nikita Araslanov fname.lname@visinf.tu-darmstadt.de

<img src="figures/results.gif" alt="drawing" width="480"/><br>
We attain competitive results by training a single network model <br> for segmentation in a self-supervised fashion using only <br> image-level annotations (one run of 20 epochs on Pascal VOC).

Setup

  1. Minimum requirements. This project was originally developed with Python 3.6, PyTorch 1.0, and CUDA 9.0. Training requires at least two Titan X GPUs (12 GB of memory each).

  2. Setup your Python environment. Please clone the repository and install the dependencies. We recommend using the Anaconda 3 distribution:

    conda create -n <environment_name> --file requirements.txt
    
  3. Download and link the dataset. We train our model on the original Pascal VOC 2012 augmented with the SBD data (10K images in total). After downloading and unpacking both datasets, link them into the project:

    ln -s <your_path_to_voc> <project>/data/voc
    ln -s <your_path_to_sbd> <project>/data/sbd
    

    Make sure that the first directory in data/voc is VOCdevkit and that the first directory in data/sbd is benchmark_RELEASE.

  4. Download pre-trained models. Download the initial weights (pre-trained on ImageNet) for the backbones you are planning to use and place them into <project>/models/weights/.

    | Backbone     | Initial Weights                              | Comment              |
    |--------------|----------------------------------------------|----------------------|
    | WideResNet38 | ilsvrc-cls_rna-a1_cls1000_ep-0001.pth (402M) | Converted from mxnet |
    | VGG16        | vgg16_20M.pth (79M)                          | Converted from Caffe |
    | ResNet50     | resnet50-19c8e357.pth                        | PyTorch official     |
    | ResNet101    | resnet101-5d3b4d8f.pth                       | PyTorch official     |
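With the dataset links from step 3 in place, the expected layout can be sanity-checked from the shell. The sketch below uses scratch directories as stand-ins for the real download locations; all paths are placeholders, so substitute your own:

```shell
# Stand-ins for the extracted datasets; replace with your real paths.
WORK=$(mktemp -d)
mkdir -p "$WORK/voc_root/VOCdevkit" "$WORK/sbd_root/benchmark_RELEASE"

# Link the data into the project, as in step 3.
mkdir -p "$WORK/project/data"
ln -sfn "$WORK/voc_root" "$WORK/project/data/voc"
ln -sfn "$WORK/sbd_root" "$WORK/project/data/sbd"

# Sanity check: the first directory under each link.
ls "$WORK/project/data/voc"   # VOCdevkit
ls "$WORK/project/data/sbd"   # benchmark_RELEASE
```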

Training, Inference and Evaluation

The directory launch contains template bash scripts for training, inference and evaluation.

Training. For each run, you need to set two variables, for example:

EXP=baselines
RUN_ID=v01

Running bash ./launch/run_voc_resnet38.sh will create a directory ./logs/pascal_voc/baselines/v01 with TensorBoard event files and will save snapshots into ./snapshots/pascal_voc/baselines/v01.
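The two variables only determine where logs and snapshots end up; the path scheme described above can be sketched as follows (the directories themselves are created by the training script):

```shell
EXP=baselines
RUN_ID=v01

LOG_DIR="./logs/pascal_voc/${EXP}/${RUN_ID}"
SNAPSHOT_DIR="./snapshots/pascal_voc/${EXP}/${RUN_ID}"

echo "$LOG_DIR"        # ./logs/pascal_voc/baselines/v01
echo "$SNAPSHOT_DIR"   # ./snapshots/pascal_voc/baselines/v01
```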

Inference. To generate the final masks, please use the script ./launch/infer_val.sh. You will need to specify the EXP and RUN_ID used for training, as well as the output path and the file list defining the split for inference.

Evaluation. To compute the IoU of the masks, please run ./launch/eval_seg.sh. You will need to specify SAVE_DIR, which contains the masks, and FILELIST, which specifies the split for evaluation.
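As a concrete sketch, evaluating the masks produced by the inference step might look like this; the SAVE_DIR and FILELIST values are hypothetical placeholders, not paths shipped with the repository:

```shell
# Hypothetical paths: SAVE_DIR points at the masks written by infer_val.sh,
# FILELIST at the image list of the split to score.
SAVE_DIR=./outputs/pascal_voc/baselines/v01/val
FILELIST=./data/val_voc.txt

echo "Scoring masks in ${SAVE_DIR} on split ${FILELIST}"
```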

Pre-trained model

For testing, we provide our pre-trained WideResNet38 model:

| Backbone     | Val  | Val (+ CRF) | Link                             |
|--------------|------|-------------|----------------------------------|
| WideResNet38 | 59.7 | 62.7        | model_enc_e020Xs0.928.pth (527M) |

We also release the masks predicted by this model:

| Split                 | IoU  | IoU (+ CRF) | Link                           | Comment                 |
|-----------------------|------|-------------|--------------------------------|-------------------------|
| train-clean (VOC+SBD) | 64.7 | 66.9        | train_results_clean.tgz (2.9G) | Reported IoU is for VOC |
| val-clean             | 63.4 | 65.3        | val_results_clean.tgz (423M)   |                         |
| val                   | 59.7 | 62.7        | val_results.tgz (427M)         |                         |
| test                  | 62.7 | 64.3        | test_results.tgz (368M)        |                         |

The suffix -clean means that we used the ground-truth image-level labels to remove masks of categories not present in the image. These masks are commonly used as pseudo ground truth to train another segmentation model in a fully supervised regime.

Acknowledgements

We thank the PyTorch team, and Jiwoon Ahn for releasing his code, which helped us in the early stages of this project.

Citation

We hope that you find this work useful. If you would like to acknowledge us, please use the following citation:

@InProceedings{Araslanov:2020:SSS,
  author    = {Araslanov, Nikita and Roth, Stefan},
  title     = {Single-Stage Semantic Segmentation From Image Labels},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  pages     = {4253--4262},
  year      = {2020}
}