HyperSeg - Official PyTorch Implementation

[Teaser figure] Example segmentations on the PASCAL VOC dataset.

This repository contains the source code for the real-time semantic segmentation method described in the paper:

HyperSeg: Patch-wise Hypernetwork for Real-time Semantic Segmentation
Conference on Computer Vision and Pattern Recognition (CVPR), 2021
Yuval Nirkin, Lior Wolf, Tal Hassner
Paper

Abstract: We present a novel, real-time, semantic segmentation network in which the encoder both encodes and generates the parameters (weights) of the decoder. Furthermore, to allow maximal adaptivity, the weights at each decoder block vary spatially. For this purpose, we design a new type of hypernetwork, composed of a nested U-Net for drawing higher level context features, a multi-headed weight generating module which generates the weights of each block in the decoder immediately before they are consumed, for efficient memory utilization, and a primary network that is composed of novel dynamic patch-wise convolutions. Despite the usage of less-conventional blocks, our architecture obtains real-time performance. In terms of the runtime vs. accuracy trade-off, we surpass state of the art (SotA) results on popular semantic segmentation benchmarks: PASCAL VOC 2012 (val. set) and real-time semantic segmentation on Cityscapes, and CamVid.

Installation

Clone the repository and set up the conda environment:

git clone https://github.com/YuvalNirkin/hyperseg
cd hyperseg
conda env create -f hyperseg_env.yml
conda activate hyperseg
pip install -e .    # Alternatively add the root directory of the repository to PYTHONPATH.
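
A quick sanity check that the environment is usable (a minimal sketch; it only assumes the editable install above exposes the package as hyperseg and that PyTorch is provided by the conda environment):

import torch
import hyperseg  # installed above via `pip install -e .`

print(torch.__version__)
print(torch.cuda.is_available())  # should be True on a CUDA-capable machine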

Next, download the models and datasets:

Models

| Template   | Dataset    | Resolution | mIoU (%)    | FPS  | Link     |
|------------|------------|------------|-------------|------|----------|
| HyperSeg-L | PASCAL VOC | 512x512    | 80.6 (val)  | -    | download |
| HyperSeg-M | CityScapes | 1024x512   | 76.2 (val)  | 36.9 | download |
| HyperSeg-S | CityScapes | 1536x768   | 78.2 (val)  | 16.1 | download |
| HyperSeg-S | CamVid     | 768x576    | 78.4 (test) | 38.0 | download |
| HyperSeg-L | CamVid     | 1024x768   | 79.1 (test) | 16.6 | -        |

The models' FPS was measured on an NVIDIA GeForce GTX 1080 Ti GPU.

Either download the models under <project root>/weights or adjust the model variable in the test configuration files.
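
To confirm that a downloaded checkpoint is readable, you can inspect it with torch.load. This is a sketch only: the file name is an example, and the checkpoint layout (e.g. whether it is a plain dict) is an assumption, not a documented guarantee.

import torch

# Example path; adjust to the file you placed under <project root>/weights.
checkpoint = torch.load('weights/cityscapes_efficientnet_b1_hyperseg-m.pth',
                        map_location='cpu')
# Print the top-level keys to see what the file actually contains.
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))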

Datasets

| Dataset    | # Images | Classes | Resolution    | Link            |
|------------|----------|---------|---------------|-----------------|
| PASCAL VOC | 10,582   | 21      | up to 500x500 | auto downloaded |
| CityScapes | 5,000    | 19      | 2048x1024     | download        |
| CamVid     | 701      | 12      | 960x720       | download        |

Either download the datasets under <project root>/data or adjust the data_dir variable in the configuration files.
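
As a quick check that Cityscapes is in the expected location, the dataset class used by the test command in the Testing section can be instantiated directly. A minimal sketch; the constructor arguments are copied from that command, and 'data/cityscapes' assumes the dataset was placed under <project root>/data.

from hyperseg.datasets.cityscapes import CityscapesDataset

# Arguments mirror the Cityscapes test command below.
dataset = CityscapesDataset('data/cityscapes', split='val', mode='fine')
print(len(dataset))  # should be 500 for the Cityscapes fine val split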

Training

To train the HyperSeg-M model on Cityscapes, set the exp_dir and data_dir paths in configs/train/cityscapes_efficientnet_b1_hyperseg-m.py (see the sketch after the command below) and run:

python configs/train/cityscapes_efficientnet_b1_hyperseg-m.py
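
The training configs are plain Python scripts, so setting the paths amounts to editing two assignments in cityscapes_efficientnet_b1_hyperseg-m.py. A minimal sketch of what that might look like; only the names exp_dir and data_dir come from this README, while the example values and surrounding layout are assumptions.

# Inside configs/train/cityscapes_efficientnet_b1_hyperseg-m.py (sketch)
exp_dir = 'checkpoints/cityscapes/cityscapes_efficientnet_b1_hyperseg-m'  # experiment output directory (example value)
data_dir = 'data/cityscapes'                                              # Cityscapes root (example value)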

Testing

Testing a model after training

For example, to test the HyperSeg-M model on the Cityscapes validation set:

python test.py 'checkpoints/cityscapes/cityscapes_efficientnet_b1_hyperseg-m' \
-td "hyperseg.datasets.cityscapes.CityscapesDataset('data/cityscapes',split='val',mode='fine')" \
-it "seg_transforms.LargerEdgeResize([512,1024])"

Testing a pretrained model

For example, to test the PASCAL VOC HyperSeg-L model using the provided test configuration:

python configs/test/vocsbd_efficientnet_b3_hyperseg-l.py

Citation

@inproceedings{nirkin2021hyperseg,
  title={{HyperSeg}: Patch-wise Hypernetwork for Real-time Semantic Segmentation},
  author={Nirkin, Yuval and Wolf, Lior and Hassner, Tal},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month={June},
  year={2021}
}