UniRef++: Segment Every Reference Object in Spatial and Temporal Spaces

Official implementation of UniRef++, an extended version of the ICCV 2023 work UniRef.

Highlights

Schedule

Results

https://github.com/FoundationVision/UniRef/assets/21001460/63d875ed-9f5b-47c9-998f-e83faffedbba

Referring Image Segmentation

Referring Video Object Segmentation

Video Object Segmentation

Zero-shot Video Segmentation & Few-shot Image Segmentation

Model Zoo

Objects365 Pretraining

| Model  | Checkpoint |
| :----: | :--------: |
| R50    | model      |
| Swin-L | model      |

Image-joint Training

| Model  | RefCOCO | FSS-1000 | Checkpoint |
| :----: | :-----: | :------: | :--------: |
| R50    | 76.3    | 85.2     | model      |
| Swin-L | 79.9    | 87.7     | model      |

Video-joint Training

All results are reported on the validation sets.

| Model           | RefCOCO | FSS-1000 | Ref-Youtube-VOS | Ref-DAVIS17 | Youtube-VOS18 | DAVIS17 | LVOS | Checkpoint |
| :-------------: | :-----: | :------: | :-------------: | :---------: | :-----------: | :-----: | :--: | :--------: |
| UniRef++-R50    | 75.6    | 79.1     | 61.5            | 63.5        | 81.9          | 81.5    | 60.1 | model      |
| UniRef++-Swin-L | 79.1    | 85.4     | 66.9            | 67.2        | 83.2          | 83.9    | 67.2 | model      |
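For a quick read of the table above, the per-benchmark gain of the Swin-L backbone over R50 can be computed directly from the reported numbers (a small sketch; the scores are copied from the video-joint training table):

```python
# Scores from the video-joint training table (validation sets).
benchmarks = ["RefCOCO", "FSS-1000", "Ref-Youtube-VOS", "Ref-DAVIS17",
              "Youtube-VOS18", "DAVIS17", "LVOS"]
r50    = [75.6, 79.1, 61.5, 63.5, 81.9, 81.5, 60.1]
swin_l = [79.1, 85.4, 66.9, 67.2, 83.2, 83.9, 67.2]

# Absolute improvement of UniRef++-Swin-L over UniRef++-R50 per benchmark.
gains = {b: round(s - r, 1) for b, r, s in zip(benchmarks, r50, swin_l)}
for name, gain in gains.items():
    print(f"{name}: +{gain}")
```

Swin-L improves on every benchmark, with the largest gain on LVOS (+7.1) and the smallest on Youtube-VOS18 (+1.3).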

Installation

See INSTALL.md

Getting Started

Please see DATA.md for data preparation.

Please see EVAL.md for evaluation.

Please see TRAIN.md for training.

Citation

If you find this project useful in your research, please consider citing:

@article{wu2023uniref++,
  title={UniRef++: Segment Every Reference Object in Spatial and Temporal Spaces},
  author={Wu, Jiannan and Jiang, Yi and Yan, Bin and Lu, Huchuan and Yuan, Zehuan and Luo, Ping},
  journal={arXiv preprint arXiv:2312.15715},
  year={2023}
}
@inproceedings{wu2023uniref,
  title={Segment Every Reference Object in Spatial and Temporal Spaces},
  author={Wu, Jiannan and Jiang, Yi and Yan, Bin and Lu, Huchuan and Yuan, Zehuan and Luo, Ping},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={2538--2550},
  year={2023}
}

Acknowledgement

This project is built on the UNINEXT codebase. We also refer to the Detectron2, Deformable DETR, STCN, and SAM repositories. Thanks for their awesome work!