
MDQE: Mining Discriminative Query Embeddings to Segment Occluded Instances on Challenging Videos

Minghan LI, Shuai LI, Wangmeng XIANG, Lei ZHANG

[arXiv]

<div align="center"> <img src="imgs/MDQE_overview.jpg" width="90%" height="100%"/> </div><br/> <div align="center"> <img src="imgs/inter_mask.jpg" width="90%" height="100%"/> </div><br/>

Updates

Installation

See installation instructions.

Getting Started

We provide a script, train_net.py, which can train all the configs provided in MDQE.

Before training: to train a model with train_net.py on VIS, first set up the corresponding datasets following Preparing Datasets for MDQE.

Then download the pretrained weights from the Model Zoo into the path `pretrained/coco/*.pth`, and run:

python train_net.py --num-gpus 8 \
  --config-file configs/R50_ovis_360.yaml 
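A minimal sketch of preparing the pretrained-weights directory before training (the checkpoint filename below is illustrative, not the actual release name):

```shell
# Create the directory expected for COCO-pretrained weights.
mkdir -p pretrained/coco

# After downloading a checkpoint from the Model Zoo, move it there, e.g.:
# mv ~/Downloads/mdqe_r50_coco.pth pretrained/coco/
```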

To evaluate a model's performance, use:

python train_net.py \
  --config-file configs/R50_ovis_360.yaml \
  --eval-only \
  MODEL.WEIGHTS /path/to/checkpoint_file

<a name="ModelZoo"></a>Model Zoo

Pretrained weights on COCO

| Name | R50 | Swin-L |
| :---: | :---: | :---: |
| MDQE | model, config | model, config |

OVIS

| Name | Backbone | Frames | AP | Download |
| :---: | :---: | :---: | :---: | :---: |
| MDQE | R50 | f4+360p | 30.7 | model, config |
| MDQE | R50 | f4+640p | 32.3 | model, config |
| MDQE | Swin-L | f2+480p | 41.0 | model, config |
| MDQE | Swin-L | f2+640p | 42.6 | model, config |

YouTubeVIS-2021

| Name | Backbone | Frames | AP | Download |
| :---: | :---: | :---: | :---: | :---: |
| MDQE | R50 | f4+360p | 46.6 | model, config |
| MDQE | Swin-L | f3+360p | 55.5 | model, config |

YouTubeVIS-2019

| Name | Backbone | Frames | AP | Download |
| :---: | :---: | :---: | :---: | :---: |
| MDQE | R50 | f4+360p | 47.8 | model, config |
| MDQE | Swin-L | f3+360p | 59.9 | model, config |

License

The majority of MDQE is licensed under the Apache-2.0 License. However, portions of the project are available under separate license terms: Detectron2 (Apache-2.0 License), IFC (Apache-2.0 License), VITA (Apache-2.0 License), and Deformable-DETR (Apache-2.0 License).

<a name="CitingMDQE"></a>Citing MDQE

If you use MDQE in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry.

@misc{li2023mdqe,
    title={MDQE: Mining Discriminative Query Embeddings to Segment Occluded Instances on Challenging Videos},
    author={Minghan Li and Shuai Li and Wangmeng Xiang and Lei Zhang},
    year={2023},
    eprint={2303.14395},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Acknowledgement

Our code is largely based on Detectron2, IFC, Deformable DETR and VITA. We are truly grateful for their excellent work.