UniMatch V2

This codebase contains the official PyTorch implementation of <b>UniMatch V2</b>:

UniMatch V2: Pushing the Limit of Semi-Supervised Semantic Segmentation<br> Lihe Yang, Zhen Zhao, Hengshuang Zhao<br> Preprint, 2024

<p align="left"> <img src="./docs/framework.png" width=90% height=90% class="center"> </p>

TL;DR: We upgrade our UniMatch V1 by switching the outdated ResNet encoders to the most capable DINOv2 encoders. We unify the image-level and feature-level augmentations into a single learnable stream to challenge the powerful model. Based on this, we further design a Complementary Dropout to craft better dual views.
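As a rough illustration of the dual-view idea, Complementary Dropout can be pictured as sampling one channel-wise dropout mask and handing its complement to the second view, so every feature channel is visible to exactly one view. The sketch below is a minimal interpretation of that idea; the function name and channel-wise granularity are assumptions, not the repository's actual code:

```python
import torch

def complementary_dropout_masks(num_channels: int, p: float = 0.5):
    """Sample a channel-wise dropout mask and its complement, so each
    channel is kept in exactly one of the two views (illustrative sketch)."""
    keep_a = (torch.rand(num_channels) < p).float()  # channels kept in view A
    keep_b = 1.0 - keep_a                            # view B keeps the rest
    return keep_a, keep_b

# Apply to a feature map of shape (N, C, H, W)
feats = torch.randn(2, 8, 4, 4)
mask_a, mask_b = complementary_dropout_masks(feats.shape[1])
view_a = feats * mask_a.view(1, -1, 1, 1)
view_b = feats * mask_b.view(1, -1, 1, 1)
```

By construction the two masks are exact complements, so the two views never drop the same channel.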

Results

We provide the training log for each reported value, which you can refer to when reproducing our results. We also provide all checkpoints of our core experiments.

Pascal VOC 2012

| Method | Encoder | 1/16 (92) | 1/8 (183) | 1/4 (366) | 1/2 (732) | Full (1464) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| UniMatch V1 | ResNet-101 | 75.2 | 77.2 | 78.8 | 79.9 | 81.2 |
| AllSpark | MiT-B5 | 76.1 | 78.4 | 79.8 | 80.8 | 82.1 |
| SemiVL | CLIP-Base | 84.0 | 85.6 | 86.0 | 86.7 | 87.3 |
| UniMatch V2 | DINOv2-Base | 86.3 | 87.9 | 88.9 | 90.0 | 90.8 |

Cityscapes

| Method | Encoder | 1/16 (186) | 1/8 (372) | 1/4 (744) | 1/2 (1488) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| UniMatch V1 | ResNet-101 | 76.6 | 77.9 | 79.2 | 79.5 |
| AllSpark | MiT-B5 | 78.3 | 79.2 | 80.6 | 81.4 |
| SemiVL | CLIP-Base | 77.9 | 79.4 | 80.3 | 80.6 |
| UniMatch V2 | DINOv2-Base | 83.6 | 84.3 | 84.5 | 85.1 |

ADE20K

| Method | Encoder | 1/64 (316) | 1/32 (631) | 1/16 (1263) | 1/8 (2526) |
| :--- | :--- | :--- | :--- | :--- | :--- |
| UniMatch V1 | ResNet-101 | 21.6 | 28.1 | 31.5 | 34.6 |
| SemiVL | CLIP-Base | 33.7 | 35.1 | 37.2 | 39.4 |
| UniMatch V2 | DINOv2-Base | 38.7 | 45.0 | 46.7 | 49.8 |

COCO

| Method | Encoder | 1/512 (232) | 1/256 (463) | 1/128 (925) | 1/64 (1849) | 1/32 (3697) |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| UniMatch V1 | ResNet-101 | 31.9 | 38.9 | 44.4 | 48.2 | 49.8 |
| AllSpark | MiT-B5 | 34.1 | 41.7 | 45.5 | 49.6 | --- |
| SemiVL | CLIP-Base | 50.1 | 52.8 | 53.6 | 55.4 | 56.5 |
| UniMatch V2 | DINOv2-Base | 47.9 | 55.8 | 58.7 | 60.4 | 63.3 |

Real-World Large-Scale SSS Setting

In addition to the traditional SSS settings above, we also explore a real-world large-scale setting, where a substantial pool of images (e.g., 10K) has already been annotated, and meanwhile a much larger pool of unlabeled images (e.g., 100K) is available. This setting is challenging but highly meaningful.

| Labeled Data (# Img) | + Unlabeled Data (# Img) | Improvement |
| :--- | :--- | :--- |
| COCO (118K) | COCO Extra (123K) | 66.4 → 67.1 |
| ADE20K (20K) | COCO Labeled (118K) | 54.1 → 54.9 |
| ADE20K (20K) | COCO All (118K + 123K) | 54.1 → 55.7 |
| Cityscapes (3K) | Cityscapes Extra (20K) | 85.2 → 85.5 |

Getting Started

Pre-trained Encoders

DINOv2-Small | DINOv2-Base | DINOv2-Large

```
├── ./pretrained
    ├── dinov2_small.pth
    ├── dinov2_base.pth
    └── dinov2_large.pth
```

Datasets

Please modify your dataset paths in the configuration files.

The ADE20K and COCO annotations have already been pre-processed by us. You can use them directly.

```
├── [Your Pascal Path]
    ├── JPEGImages
    └── SegmentationClass
```
    
```
├── [Your Cityscapes Path]
    ├── leftImg8bit
    └── gtFine
```

```
├── [Your ADE20K Path]
    ├── images
    │   ├── training
    │   └── validation
    └── annotations
        ├── training
        └── validation
```

```
├── [Your COCO Path]
    ├── train2017
    ├── val2017
    └── masks
```

Training

UniMatch V2

```shell
# use torch.distributed.launch
sh scripts/train.sh <num_gpu> <port>
# to fully reproduce our results, set <num_gpu> to 4 on all four datasets;
# otherwise, adjust the learning rate accordingly

# or use slurm
# sh scripts/slurm_train.sh <num_gpu> <port> <partition>
```
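One common way to adjust the learning rate for a different GPU count is the linear-scaling rule: scale the LR proportionally to the effective batch size. The sketch below is a generic heuristic, not the authors' prescription, and the base LR value shown is hypothetical:

```python
def scaled_lr(base_lr: float, base_gpus: int, num_gpus: int) -> float:
    """Linear-scaling rule: keep base_lr when num_gpus == base_gpus,
    otherwise scale proportionally to the effective batch size."""
    return base_lr * num_gpus / base_gpus

# e.g. adapting the 4-GPU recipe (hypothetical base LR of 1e-4) to 2 GPUs
lr = scaled_lr(1e-4, base_gpus=2 * 2, num_gpus=2)  # halves the LR
```

Check the repository's configuration files for the actual base learning rates before applying any scaling.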

To train on other datasets or splits, please modify dataset and split in train.sh.

FixMatch

Modify the method from 'unimatch_v2' to 'fixmatch' in train.sh.

Supervised Baseline

Modify the method from 'unimatch_v2' to 'supervised' in train.sh.

Citation

If you find this project useful, please consider citing:

```bibtex
@article{unimatchv2,
  title={UniMatch V2: Pushing the Limit of Semi-Supervised Semantic Segmentation},
  author={Yang, Lihe and Zhao, Zhen and Zhao, Hengshuang},
  journal={arXiv:2410.10777},
  year={2024}
}
```