
<div align="center"> <h2>SegPrompt: Boosting Open-world Segmentation via Category-level Prompt Learning</h2>

Muzhi Zhu<sup>1</sup>,   Hengtao Li<sup>1</sup>,   Hao Chen<sup>1</sup>,   Chengxiang Fan<sup>1</sup>,   Weian Mao<sup>2,1</sup>,   Chenchen Jing<sup>1</sup>,   Yifan Liu<sup>2</sup>,   Chunhua Shen<sup>1</sup>

<sup>1</sup>Zhejiang University,   <sup>2</sup>The University of Adelaide

<img src="assets/framework.png" width="800"/> </div>

## News

## Installation

Please follow the installation instructions in [Mask2Former](https://github.com/facebookresearch/Mask2Former).

### Other requirements

```bash
pip install torchshow
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.10.1+cu113.html
pip install lvis
pip install setuptools==59.5.0
pip install seaborn
```
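
Note that the `torch-scatter` wheel index above targets torch 1.10.1 built with CUDA 11.3. A quick sketch to check that your environment matches before installing (otherwise pick the matching wheel index from data.pyg.org):

```python
# Verify the installed torch build matches the torch-scatter wheel index
# (torch 1.10.1 + CUDA 11.3); switch the wheel URL if yours differs.
import torch

print(torch.__version__)          # expect 1.10.1 (possibly with a +cu113 suffix)
print(torch.version.cuda)         # expect 11.3
print(torch.cuda.is_available())  # should be True for GPU training
```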

## LVIS-OW benchmark

Here we provide our proposed new benchmark, LVIS-OW.

### Dataset preparation

First, prepare the COCO and LVIS datasets and place them under `$DETECTRON2_DATASETS`, following the [Detectron2 dataset instructions](https://detectron2.readthedocs.io/en/latest/tutorials/builtin_datasets.html).

The dataset structure is as follows:

```
datasets/
  coco/
    annotations/
      instances_{train,val}2017.json
    {train,val}2017/
  lvis/
    lvis_v1_{train,val}.json
```
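
As a quick sanity check, here is a minimal sketch (assuming `DETECTRON2_DATASETS` falls back to `./datasets`, Detectron2's default) that verifies the files listed above are in place:

```python
# Check that the expected COCO/LVIS files exist under DETECTRON2_DATASETS.
import os

root = os.environ.get("DETECTRON2_DATASETS", "datasets")
expected = [
    "coco/annotations/instances_train2017.json",
    "coco/annotations/instances_val2017.json",
    "coco/train2017",
    "coco/val2017",
    "lvis/lvis_v1_train.json",
    "lvis/lvis_v1_val.json",
]
for rel in expected:
    path = os.path.join(root, rel)
    print(("OK   " if os.path.exists(path) else "MISS ") + path)
```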

We reorganize the dataset and divide the categories into Known, Seen, and Unseen to better evaluate open-world models. The json files can be downloaded from here.

Alternatively, you can generate them directly from the COCO and LVIS json files:

```bash
bash tools/prepare_lvisow.sh
```

After you have successfully generated `lvis_v1_train_ow.json` and `lvis_v1_val_resplit_r.json`, you can refer to here to register the training and test sets (a minimal sketch is also given below). Then you can use our benchmark for training and testing.
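
A minimal registration sketch using Detectron2's built-in LVIS helpers; the dataset names and the reuse of the standard `lvis_v1` metadata are illustrative assumptions, not the repo's exact registration code:

```python
# Register the LVIS-OW splits with Detectron2. The names below
# ("lvis_v1_train_ow", "lvis_v1_val_resplit_r") are assumed for illustration.
import os
from detectron2.data.datasets import get_lvis_instances_meta, register_lvis_instances

root = os.environ.get("DETECTRON2_DATASETS", "datasets")
meta = get_lvis_instances_meta("lvis_v1")  # assumes the resplit keeps LVIS v1 categories

register_lvis_instances(
    "lvis_v1_train_ow", meta,
    os.path.join(root, "lvis/lvis_v1_train_ow.json"),
    os.path.join(root, "coco/"),  # LVIS images are the COCO images
)
register_lvis_instances(
    "lvis_v1_val_resplit_r", meta,
    os.path.join(root, "lvis/lvis_v1_val_resplit_r.json"),
    os.path.join(root, "coco/"),
)
```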

### Evaluation on LVIS-OW

```bash
python tools/eval_lvis_ow.py --dt-json-file output/m2f_binary_lvis_ow/lvis_r/inference/lvis_instances_results.json
```
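
For a quick cross-check, you can also score the same results file with the plain `lvis` toolkit installed earlier. Note this reports the standard LVIS metrics only, not the Known/Seen/Unseen resplit metrics of `tools/eval_lvis_ow.py`, and the ground-truth path below is an assumption based on the layout above:

```python
# Score predictions with the standard LVIS evaluator (pip install lvis).
from lvis import LVISEval

gt_json = "datasets/lvis/lvis_v1_val_resplit_r.json"  # assumed location
dt_json = "output/m2f_binary_lvis_ow/lvis_r/inference/lvis_instances_results.json"

ev = LVISEval(gt_json, dt_json, iou_type="segm")
ev.run()
ev.print_results()
```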

## Acknowledgement

We thank the following repos for their great work:

## Cite our Paper

If you find this project useful for your research, please kindly cite our paper.
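
A BibTeX sketch assembled from the title and author list above; the venue and year are our best guess (ICCV 2023) and should be verified against the published version:

```bibtex
@inproceedings{zhu2023segprompt,
  title     = {SegPrompt: Boosting Open-world Segmentation via Category-level Prompt Learning},
  author    = {Zhu, Muzhi and Li, Hengtao and Chen, Hao and Fan, Chengxiang and Mao, Weian and Jing, Chenchen and Liu, Yifan and Shen, Chunhua},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2023}
}
```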

## 🎫 License

For non-commercial academic use, this project is licensed under the 2-clause BSD License. For commercial use, please contact Chunhua Shen.