OPS: Towards Open-World Segmentation of Parts
CVPR 2023
Tai-Yu Pan, Qing Liu, Wei-Lun Chao, Brian Price
[BibTeX]
Installation
See Mask2Former. Additional packages for OPS:
pip install -e kmeans_pytorch
pip install -U scikit-image
Tested environment:
python==3.8.13
torch==1.13.0+cu116
torchaudio==0.13.0+cu116
torchvision==0.14.0+cu116
cudatoolkit==11.6.0
numpy==1.23.5
numba==0.56.3
scikit-image==0.20.0
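To quickly verify the setup, one can run a small version-check snippet such as the one below (it is not part of the repo and only mirrors the tested versions listed above):
# Optional sanity check (not repo code): confirm installed versions roughly
# match the tested environment above and that CUDA is visible.
import numba
import numpy
import skimage
import torch
import torchvision

print("torch       ", torch.__version__)
print("torchvision ", torchvision.__version__)
print("numpy       ", numpy.__version__)
print("numba       ", numba.__version__)
print("scikit-image", skimage.__version__)
print("CUDA available:", torch.cuda.is_available())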
Demo
Option 1: Streamlit
Interactive mode for drawing object masks. Requires installing Streamlit:
pip install streamlit streamlit-drawable-canvas
then run
streamlit run predict_part_web.py CKPT
For a quick start, one can use demo/5682167295_e61bfbb33e_z.jpg, an image from COCO.
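For reference, below is a minimal, hypothetical sketch of how an interactive mask-drawing app can be wired with streamlit-drawable-canvas. It is not the contents of predict_part_web.py, and the call into the OPS model is omitted:
# Minimal, hypothetical sketch of a Streamlit mask-drawing app
# (not the repo's predict_part_web.py; the OPS model call is omitted).
import numpy as np
import streamlit as st
from PIL import Image
from streamlit_drawable_canvas import st_canvas

uploaded = st.file_uploader("Upload an RGB image", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    image = Image.open(uploaded).convert("RGB")
    # Let the user paint the object of interest on top of the image.
    canvas = st_canvas(
        background_image=image,
        drawing_mode="freedraw",
        stroke_width=25,
        stroke_color="rgba(255, 0, 0, 0.5)",
        height=image.height,
        width=image.width,
        key="mask_canvas",
    )
    if canvas.image_data is not None:
        # Pixels covered by any stroke become the binary object mask.
        mask = (np.asarray(canvas.image_data)[:, :, 3] > 0).astype(np.uint8)
        st.image(mask * 255, caption="Drawn object mask")
        # The RGB image and this mask would then be passed to the OPS predictor.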
Option 2: Inference with RGBA images
python predict_part.py CKPT --input IMG_RGBA_1 IMG_RGBA_2 --score-threshold 0.1 --topk 10
Output images are saved to demo/outputs if no output directory is specified. See predict_part.py for more information.
For a quick start, one can use demo/5682167295_e61bfbb33e_z_1.png, a modified RGBA image from COCO.
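For orientation, the short sketch below shows one plausible way to inspect such an RGBA input, under the assumption (not confirmed against predict_part.py) that the alpha channel marks the object whose parts should be segmented:
# Hypothetical illustration: inspect an RGBA demo input, assuming the alpha
# channel encodes the object mask (an assumption, not repo code).
import numpy as np
from PIL import Image

rgba = np.array(Image.open("demo/5682167295_e61bfbb33e_z_1.png").convert("RGBA"))
rgb = rgba[:, :, :3]                 # image content
object_mask = rgba[:, :, 3] > 0      # alpha > 0 marks the object of interest
print("image size:", rgb.shape[:2], "| object pixels:", int(object_mask.sum()))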
Training
python train_net_part.py --config-file configs/part_segmentation/SETTING --num-gpus N
For more command-line options, please see Mask2Former and train_net_part.py.
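train_net_part.py presumably follows the standard detectron2 launch pattern used by Mask2Former's training script, which is where --config-file and --num-gpus come from. The sketch below is a simplified, hypothetical outline of that pattern, not the actual script:
# Simplified, hypothetical outline of the detectron2 launch pattern used by
# Mask2Former-style training scripts; not the actual train_net_part.py.
from detectron2.engine import default_argument_parser, launch

def main(args):
    # Repo-specific: build the cfg from --config-file, create the trainer,
    # then train (omitted here).
    pass

if __name__ == "__main__":
    args = default_argument_parser().parse_args()
    launch(
        main,
        num_gpus_per_machine=args.num_gpus,   # set via --num-gpus
        num_machines=args.num_machines,
        machine_rank=args.machine_rank,
        dist_url=args.dist_url,
        args=(args,),
    )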
License
The majority of OPS is licensed under the MIT License.
However, portions of the project are available under separate license terms: Swin-Transformer-Semantic-Segmentation is licensed under the MIT License, and Deformable-DETR is licensed under the Apache-2.0 License.
Citing OPS
If you use OPS in your research, please use the following BibTeX entry.
@inproceedings{pan2023ops,
  title={Towards Open-World Segmentation of Parts},
  author={Tai-Yu Pan and Qing Liu and Wei-Lun Chao and Brian Price},
  booktitle={CVPR},
  year={2023}
}
Acknowledgement
Code is largely based on Mask2Former (https://github.com/facebookresearch/Mask2Former).