
GALD-Net-v2 (TIP-2021)

Note that our GALD-Net-v2 (an improved version of GALD-Net-v1) has been accepted by TIP-2021! It achieves 83.5 mIoU on Cityscapes with a ResNet-101 backbone!

GALD-Net & Dual-Seg Net (BMVC-2019)

This is a PyTorch re-implementation of GALD-Net and Dual-Seg-Net. Both papers were accepted by BMVC-2019 and achieve state-of-the-art results on the Cityscapes and Pascal Context datasets.

High-Performance Road Scene Semantic Segmentation :tada:

There is also a concurrent repo for fast road scene semantic segmentation: Fast_Seg :zap:. Thanks for your attention! :smiley:

GALDNet

(GALD-Net architecture diagram)

DualGCNSegNet

(DualGCNSegNet architecture diagram)

Training & Validation

Requirements

- pytorch >= 1.1.0
- apex
- opencv-python

Pretrained Model

- Baidu Pan: https://pan.baidu.com/s/1MWzpkI3PwtnEl1LSOyLrLw (password: 4lwf)
- Google Drive: https://drive.google.com/file/d/1JlERBWT8fHvf-uD36k5-LRZ5taqUbraj/view?usp=sharing and https://drive.google.com/file/d/1gGzz_6ZHUSC4A3SO0yg8-uLE0iiPdO4H/view?usp=sharing
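Once downloaded, a checkpoint can be loaded in the standard PyTorch way. The snippet below is a minimal sketch: the file name is a placeholder for wherever you saved the download, and the tiny stand-in module exists only so the example is self-contained (the real model is built by the repo's training code).

```python
import torch
import torch.nn as nn

# Stand-in for the segmentation network; the repo builds the real model
# in train_distribute.py. 19 is the number of Cityscapes classes.
model = nn.Conv2d(3, 19, kernel_size=1)

# Hypothetical path; in practice this is the downloaded checkpoint.
# Here we save the state dict ourselves so the sketch runs end to end.
ckpt_path = "dual_seg_r101_city.pth"
torch.save(model.state_dict(), ckpt_path)

# map_location="cpu" lets you load weights saved during GPU training
# on a machine without a GPU.
state = torch.load(ckpt_path, map_location="cpu")
model.load_state_dict(state)
```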

Training

Note that we use apex to speed up training. At least 8 GPUs with 12 GB of memory each are needed, since we require a batch size of at least 8 and a crop size of at least 800 on the Cityscapes dataset. Please see train_distribute.py for details.

sh ./exp/train_dual_seg_r50_city_finetrain.sh

This will give a model with 79.6~79.8 mIoU.

sh ./exp/train_dual_seg_r101_city_finetrain.sh

This will give a model with 80.3~80.4 mIoU.

Validation

sh ./exp/tes_dualseg_r50_city_finetrain.sh

Trained Model

Models trained on the Cityscapes fine dataset:

Dual-Seg-net: ResNet 50, ResNet 101

Some Advice on Training

Please see Common.md for details on training with the coarse data, or refer to the last part of our GALD paper.

GALD-Net (BMVC 2019, arXiv)

We propose the Global Aggregation then Local Distribution (GALD) scheme, which distributes global information to each position adaptively according to the local information around that position. GALD-Net achieves top performance on the Cityscapes dataset. The work was done at DeepMotion AI Research.
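As a rough illustration of the scheme (a sketch, not the authors' exact module: the pooling, reduction ratio, and depthwise gating below are all assumptions), global aggregation can squeeze the feature map into a global descriptor, and local distribution can predict a per-position gate that decides how much of that descriptor each position receives:

```python
import torch
import torch.nn as nn

class GALDBlock(nn.Module):
    """Illustrative Global Aggregation then Local Distribution block."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Global aggregation: collapse the spatial dims into one descriptor.
        self.aggregate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        # Local distribution: per-position gates computed from local context.
        self.distribute = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        g = self.aggregate(x)      # (N, C, 1, 1) global descriptor
        mask = self.distribute(x)  # (N, C, H, W) local gates in [0, 1]
        return x + g * mask        # distribute global info adaptively

x = torch.randn(2, 64, 16, 16)
y = GALDBlock(64)(x)
```

The key point the sketch captures is that the global descriptor `g` is not added uniformly: the locally computed `mask` scales it differently at every position.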

DGCNet (BMVC 2019, arXiv)

We propose the Dual Graph Convolutional Network (DGCNet) to model the global context of the input feature by modelling two orthogonal graphs in a single framework. (Joint work with the University of Oxford, Peking University, and DeepMotion AI Research.)
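A minimal sketch of the two-branch idea, assuming a non-local-style affinity for the coordinate-space graph and a projection-based reasoning step for the feature-space graph (layer names and sizes are illustrative, not the paper's exact design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordinateGraph(nn.Module):
    """Graph over spatial positions: pairwise affinities between pixels."""
    def __init__(self, channels):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels // 2, 1)
        self.phi = nn.Conv2d(channels, channels // 2, 1)
        self.g = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (N, HW, C/2)
        k = self.phi(x).flatten(2)                    # (N, C/2, HW)
        v = self.g(x).flatten(2).transpose(1, 2)      # (N, HW, C)
        attn = F.softmax(q @ k, dim=-1)               # (N, HW, HW) affinity
        out = (attn @ v).transpose(1, 2).reshape(n, c, h, w)
        return x + out

class FeatureGraph(nn.Module):
    """Graph over a small set of feature nodes: project, reason, reproject."""
    def __init__(self, channels, nodes=16):
        super().__init__()
        self.proj = nn.Conv2d(channels, nodes, 1)    # soft pixel->node map
        self.gcn = nn.Conv1d(channels, channels, 1)  # reasoning among nodes

    def forward(self, x):
        n, c, h, w = x.shape
        assign = F.softmax(self.proj(x).flatten(2), dim=-1)   # (N, K, HW)
        nodes = x.flatten(2) @ assign.transpose(1, 2)         # (N, C, K)
        nodes = F.relu(self.gcn(nodes))
        out = (nodes @ assign).reshape(n, c, h, w)            # back to pixels
        return x + out

x = torch.randn(2, 32, 8, 8)
y = CoordinateGraph(32)(x) + FeatureGraph(32)(x)
```

The two branches reason over orthogonal structures: one over where things are (spatial affinities), the other over what things are (a compact set of feature nodes); the full model fuses their outputs.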

Comparisons with state-of-the-art models on the Cityscapes dataset

| Method | Conference | Backbone | mIoU (%) |
|---|---|---|---|
| RefineNet | CVPR2017 | ResNet-101 | 73.6 |
| SAC | ICCV2017 | ResNet-101 | 78.1 |
| PSPNet | CVPR2017 | ResNet-101 | 78.4 |
| DUC-HDC | WACV2018 | ResNet-101 | 77.6 |
| AAF | ECCV2018 | ResNet-101 | 77.1 |
| BiSeNet | ECCV2018 | ResNet-101 | 78.9 |
| PSANet | ECCV2018 | ResNet-101 | 80.1 |
| DFN | CVPR2018 | ResNet-101 | 79.3 |
| DSSPN | CVPR2018 | ResNet-101 | 77.8 |
| DenseASPP | CVPR2018 | DenseNet-161 | 80.6 |
| OCNet | - | ResNet-101 | 81.7 |
| CCNet | ICCV2019 | ResNet-101 | 81.4 |
| GALD-Net | BMVC2019 | ResNet-50 | 80.8 |
| GALD-Net | BMVC2019 | ResNet-101 | 81.8 |
| DGCN-Net | BMVC2019 | ResNet-101 | 82.0 |
| GALD-Net (use coarse data) | BMVC2019 | ResNet-101 | 82.9 |
| GALD-Net-v2 (use coarse data) | TIP2021 | ResNet-101 | 83.5 |
| GALD-Net (use Mapillary) | BMVC2019 | ResNet-101 | 83.3 |

Detailed Results

GALD-Net: here
GFF-Net: here

Both are single-model results.

Citation

Please refer to our papers for more details. If you find this codebase useful, please consider citing:

@inproceedings{xiangtl_gald,
  title={Global Aggregation then Local Distribution in Fully Convolutional Networks},
  author={Li, Xiangtai and Zhang, Li and You, Ansheng and Yang, Maoke and Yang, Kuiyuan and Tong, Yunhai},
  booktitle={BMVC},
  year={2019},
}
@inproceedings{zhangli_dgcn,
  title={Dual Graph Convolutional Network for Semantic Segmentation},
  author={Zhang, Li and Li, Xiangtai and Arnab, Anurag and Yang, Kuiyuan and Tong, Yunhai and Torr, Philip HS},
  booktitle={BMVC},
  year={2019},
}

License

MIT License

Acknowledgement

Thanks to these previously open-sourced repos:
Encoding
CCNet
TorchSeg
pytorchseg