Introduction

vedaseg is an open source semantic segmentation toolbox based on PyTorch.

Features

License

This project is released under the Apache 2.0 license.

Benchmark and model zoo

Note: All models are trained only on PASCAL VOC 2012 trainaug dataset and evaluated on PASCAL VOC 2012 val dataset.

Architecture  | Backbone       | OS | MS & Flip | mIOU
------------- | -------------- | -- | --------- | ------
DeepLabv3plus | ResNet-101     | 16 | True      | 79.46%
DeepLabv3plus | ResNet-101     | 16 | False     | 77.90%
DeepLabv3     | ResNet-101     | 16 | True      | 79.22%
DeepLabv3     | ResNet-101     | 16 | False     | 77.08%
FPN           | ResNet-101     | 4  | True      | 77.05%
FPN           | ResNet-101     | 4  | False     | 75.64%
PSPNet        | ResNet-101     | 8  | True      | 78.39%
PSPNet        | ResNet-101     | 8  | False     | 77.30%
PSPNet        | ResNet_v1c-101 | 8  | True      | 79.88%
PSPNet        | ResNet_v1c-101 | 8  | False     | 78.85%
U-Net         | ResNet-101     | 1  | True      | 74.58%
U-Net         | ResNet-101     | 1  | False     | 72.59%

OS: Output stride used during evaluation.
MS: Multi-scale inputs during evaluation.
Flip: Adding horizontal flipped inputs during evaluation.
ResNet_v1c: Modified stem from original ResNet, as shown in Figure 2(b) in this paper.

The models above are available on Google Drive.

Installation

Requirements

We have tested the following versions of OS and software:

Install vedaseg

  1. Create a conda virtual environment and activate it.
conda create -n vedaseg python=3.6.9 -y
conda activate vedaseg
  2. Install PyTorch and torchvision following the official instructions, e.g.,
conda install pytorch torchvision -c pytorch
  3. Clone the vedaseg repository.
git clone https://github.com/Media-Smart/vedaseg.git
cd vedaseg
vedaseg_root=${PWD}
  4. Install dependencies.
pip install -r requirements.txt
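
After these four steps, an optional sanity check (not part of the official instructions; it assumes a CUDA-capable GPU is available) can confirm that PyTorch and torchvision were installed correctly:

# prints the installed versions and whether a GPU is visible to PyTorch
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"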

Prepare data

VOC data

Download Pascal VOC 2012 and the Pascal VOC 2012 augmented dataset (see the Semantic Boundaries Dataset and Benchmark for details), resulting in 10,582 training images (trainaug) and 1,449 validation images.

cd ${vedaseg_root}
mkdir ${vedaseg_root}/data
cd ${vedaseg_root}/data

wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
wget http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz

tar xf VOCtrainval_11-May-2012.tar
tar xf benchmark.tgz

python ../tools/encode_voc12_aug.py
python ../tools/encode_voc12.py

mkdir VOCdevkit/VOC2012/EncodeSegmentationClass
#cp benchmark_RELEASE/dataset/encode_cls/* VOCdevkit/VOC2012/EncodeSegmentationClass
(cd benchmark_RELEASE/dataset/encode_cls; cp * ${vedaseg_root}/data/VOCdevkit/VOC2012/EncodeSegmentationClass)
#cp VOCdevkit/VOC2012/EncodeSegmentationClassPart/* VOCdevkit/VOC2012/EncodeSegmentationClass
(cd VOCdevkit/VOC2012/EncodeSegmentationClassPart; cp * ${vedaseg_root}/data/VOCdevkit/VOC2012/EncodeSegmentationClass)

comm -23 <(cat benchmark_RELEASE/dataset/{train,val}.txt VOCdevkit/VOC2012/ImageSets/Segmentation/train.txt | sort -u) <(cat VOCdevkit/VOC2012/ImageSets/Segmentation/val.txt | sort -u) > VOCdevkit/VOC2012/ImageSets/Segmentation/trainaug.txt

To avoid tedious operations, you can save the above Linux commands as a shell file and execute it.
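
For example, a minimal wrapper could look like the sketch below; the file name prepare_voc.sh is only illustrative, and the body is simply the commands from above:

#!/usr/bin/env bash
# prepare_voc.sh -- illustrative wrapper around the VOC preparation commands above;
# run it from the vedaseg repository root.
set -e                          # stop at the first failing command
vedaseg_root=${PWD}
# ... paste the download, extraction, encoding and copy commands from above here ...

Then run it with bash prepare_voc.sh (or chmod +x prepare_voc.sh && ./prepare_voc.sh).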

COCO data

Download the COCO-2017 dataset.

cd ${vedaseg_root}
mkdir ${vedaseg_root}/data
cd ${vedaseg_root}/data
mkdir COCO2017 && cd COCO2017
wget -c http://images.cocodataset.org/zips/train2017.zip
unzip train2017.zip && rm train2017.zip
wget -c http://images.cocodataset.org/zips/val2017.zip
unzip val2017.zip &&  rm val2017.zip
wget -c http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip annotations_trainval2017.zip && rm annotations_trainval2017.zip

Folder structure

The folder structure should be similar to the following:

data
├── COCO2017
│   ├── annotations
│   │   ├── instances_train2017.json
│   │   ├── instances_val2017.json
│   ├── train2017
│   ├── val2017
├── VOCdevkit
│   ├── VOC2012
│   │   ├── JPEGImages
│   │   ├── SegmentationClass
│   │   ├── ImageSets
│   │   │   ├── Segmentation
│   │   │   │   ├── trainaug.txt
│   │   │   │   ├── val.txt

Train

  1. Config

Modify the configuration files in configs/ according to your needs (e.g. configs/voc_unet.py).

The major configuration differences between single-label and multi-label training lie in nclasses, multi_label, metrics and criterion. You can take configs/coco_multilabel_unet.py as a reference (a quick way to inspect these fields is sketched at the end of this section). Currently, multi-label training is only supported with the COCO data format.

  2. Distributed training
# train pspnet using GPUs with gpu_id 0, 1, 2, 3
./tools/dist_train.sh configs/voc_pspnet.py "0, 1, 2, 3" 
  3. Non-distributed training
python tools/train.py configs/voc_unet.py

Snapshots and logs will be generated at ${vedaseg_root}/workdir/name_of_config_file by default (you can specify workdir in the config files).
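
To see how the single-label and multi-label settings differ in practice, one optional way is to search for the fields mentioned in step 1 directly in the two example configs (the field names come from the note above; adjust the file names to the configs you actually use):

cd ${vedaseg_root}
# list the lines defining the fields that differ between the two setups
grep -n -E "nclasses|multi_label|metrics|criterion" configs/voc_unet.py configs/coco_multilabel_unet.py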

Test

  1. Config

Modify the configuration as you wish (e.g. configs/voc_unet.py).

  2. Distributed testing
# test pspnet using GPUs with gpu_id 0, 1, 2, 3
./tools/dist_test.sh configs/voc_pspnet.py path/to/checkpoint.pth "0, 1, 2, 3" 
  3. Non-distributed testing
python tools/test.py configs/voc_unet.py path/to/checkpoint.pth

Inference

  1. Config

Modify the configuration as you wish (e.g. configs/voc_unet.py).

  2. Run
# visualize the results in a new window
python tools/inference.py configs/voc_unet.py checkpoint_path image_file_path --show

# save the visualization results in a folder named after the image prefix, by default under './result/'
python tools/inference.py configs/voc_unet.py checkpoint_path image_file_path --out folder_name

Deploy

  1. Convert to ONNX

Firstly, install volksdep following the official instructions.

Then, run the following command to convert the PyTorch model to ONNX. The input shape format is CxHxW. If you need an ONNX model with dynamic input shape, add --dynamic_shape at the end of the command (see the example below).

python tools/torch2onnx.py configs/voc_unet.py weight_path out_path --dummy_input_shape 3,513,513 --opset_version 11
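
For instance, exporting a model that accepts variable input sizes only requires appending the flag mentioned above (weight_path and out_path are placeholders, as in the command above):

# same conversion as above, but with a dynamic input shape
python tools/torch2onnx.py configs/voc_unet.py weight_path out_path --dummy_input_shape 3,513,513 --opset_version 11 --dynamic_shape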

Here are some known issues:

  2. Inference SDK

Firstly, install flexinfer and see the example for details.

Contact

This repository is currently maintained by Yuxin Zou (@YuxinZou), Tianhe Wang (@DarthThomas), Hongxiang Cai (@hxcai), and Yichao Xiong (@mileistone).