AlignSeg: Feature-Aligned Segmentation Networks

PyTorch code for the TPAMI paper "AlignSeg: Feature-Aligned Segmentation Networks". This is a minimal codebase for running AlignSeg on the Cityscapes dataset. The code will later be reorganized with MMSegmentation.

Architecture

Overview of AlignSeg

Requirements && Install

Python 3.7

2 × 32 GB GPUs (e.g., V100)

# Install Apex
$ git clone https://github.com/NVIDIA/apex
$ cd apex
$ pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./

# Install Inplace-ABN
$ git clone https://github.com/mapillary/inplace_abn.git
$ cd inplace_abn
$ python setup.py install
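
After installation, a quick sanity check such as the Python sketch below (an assumption about your environment, not part of this repository) confirms that both extensions import correctly and that CUDA is visible:

# Sanity check: both dependencies should import without errors.
import torch
from apex import amp                    # Apex mixed-precision utilities
from inplace_abn import InPlaceABNSync  # synchronized In-Place Activated BatchNorm

print("CUDA available:", torch.cuda.is_available())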

Dataset and pretrained model

Please download the Cityscapes dataset and unzip it into YOUR_CS_PATH.

Please download the MIT ImageNet-pretrained resnet101-imagenet.pth and put it into the dataset folder.
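
Optionally, a small Python sketch like the one below can verify the setup. It assumes the standard Cityscapes folder layout and that the checkpoint is a plain state dict; the layout actually expected is defined by this repository's dataset code, so adjust as needed.

import os
import torch

cs_path = "YOUR_CS_PATH"  # the same path passed to run_local.sh
# Standard Cityscapes sub-folders; the dataset loader may also expect list files.
for sub in ("leftImg8bit/train", "leftImg8bit/val", "gtFine/train", "gtFine/val"):
    assert os.path.isdir(os.path.join(cs_path, sub)), f"missing {sub}"

# The MIT ImageNet-pretrained backbone is typically loaded as a plain state dict.
state_dict = torch.load("dataset/resnet101-imagenet.pth", map_location="cpu")
print(f"pretrained backbone: {len(state_dict)} parameter tensors")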

Training and Evaluation

./run_local.sh YOUR_CS_PATH alignseg 120000 872,872 1
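
The positional arguments appear to be, in order, the Cityscapes path, the model name, the number of training iterations, the crop size, and whether to use OHEM; this reading is inferred from the command itself and the table below, so check run_local.sh for the authoritative meanings.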

Models

| OHEM | mIoU on Cityscapes val set (single scale) | Link |
| ---- | ----------------------------------------- | ---- |
| No   | 80.3                                       | model |
| Yes  | 81.4                                       | model |

Citing

If you find this code useful in your research, please consider citing:

@article{huang2021alignseg,
    title={Alignseg: Feature-aligned segmentation networks},
    author={Huang, Zilong and Wei, Yunchao and Wang, Xinggang and Shi, Humphrey and Liu, Wenyu and Huang, Thomas S},
    journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
    year={2021},
    publisher={IEEE}
}

Visualization of the offset maps

Overview of offset maps

Some visualizations of offsets learned in different aggregation stages on the Cityscapes *val* set. The visualizations for each sample are displayed in two rows. The image and its ground truth are shown in the first column. The following four columns show the offsets of the four AlignFA modules, respectively. The upper row contains the offset maps $\Delta^A$ and the lower row contains the offset maps $\Delta^F$. The 1st AlignFA is closest to the input layer, and the 4th AlignFA is closest to the output layer.
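
For readers who want a concrete picture of what such offsets do, below is a minimal, illustrative PyTorch sketch of offset-based feature alignment: each feature map is warped by a learned per-pixel offset field via F.grid_sample before aggregation. It is not the repository's AlignFA implementation; module names, channel counts, and the way the two offset maps $\Delta^A$ and $\Delta^F$ are predicted and combined here are simplified assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(feat, offset):
    # feat:   (B, C, H, W) feature map
    # offset: (B, 2, H, W) per-pixel offsets in pixels; channel 0 = x, channel 1 = y
    B, _, H, W = feat.shape
    # Base sampling grid in normalized [-1, 1] coordinates
    # (meshgrid uses 'ij' indexing by default: rows = y, cols = x).
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=feat.device),
        torch.linspace(-1, 1, W, device=feat.device),
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    # Convert pixel offsets to the normalized coordinate range and shift the grid.
    norm = torch.tensor([2.0 / max(W - 1, 1), 2.0 / max(H - 1, 1)], device=feat.device)
    grid = grid + offset.permute(0, 2, 3, 1) * norm
    return F.grid_sample(feat, grid, align_corners=True)

class ToyAlignedAggregation(nn.Module):
    # Toy aligned aggregation: predict one offset map per input feature,
    # warp both features, then sum them. Both inputs have `channels` channels.
    def __init__(self, channels):
        super().__init__()
        self.offset_head = nn.Conv2d(2 * channels, 4, kernel_size=3, padding=1)

    def forward(self, low_res, high_res):
        # Upsample the coarse feature to the fine resolution first.
        low_up = F.interpolate(low_res, size=high_res.shape[2:],
                               mode="bilinear", align_corners=True)
        offsets = self.offset_head(torch.cat([low_up, high_res], dim=1))
        delta_a, delta_f = offsets[:, :2], offsets[:, 2:]
        return warp(low_up, delta_a) + warp(high_res, delta_f)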