OSFormer: One-Stage Camouflaged Instance Segmentation with Transformers (ECCV 2022)

OSFormer

Official Implementation of "OSFormer: One-Stage Camouflaged Instance Segmentation with Transformers"

Jialun Pei*, Tianyang Cheng*, Deng-Ping Fan, He Tang, Chuanbo Chen, and Luc Van Gool

[Paper]; [Chinese Version]; [Official Version]; [Project Page]

Contact: dengpfan@gmail.com, peijl@hust.edu.cn

| Sample 1 | Sample 2 | Sample 3 | Sample 4 |
| --- | --- | --- | --- |
| <img src="docs/COD10K-CAM-3-Flying-53-Bird-3024.gif" height=125 width=170> | <img src="docs/COD10K-CAM-3-Flying-65-Owl-4620.gif" height=125 width=170> | <img src="docs/488.gif" height=125 width=170> | <img src="docs/4126.gif" height=125 width=170> |

Environment preparation

The code is tested with CUDA 11.1 and PyTorch 1.9.0; adjust the versions in the commands below if you need different ones.

git clone https://github.com/PJLallen/OSFormer.git
cd OSFormer
conda create -n osformer python=3.8 -y
conda activate osformer
conda install pytorch==1.9.0 torchvision cudatoolkit=11.1 -c pytorch -c nvidia -y
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.9/index.html
python setup.py build develop
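After installation, it may help to sanity-check the environment before moving on. The snippet below is a minimal sketch (not part of the repository) that compares the installed PyTorch version against the tested 1.9.0; the version comparison uses only the standard library, and the `torch` import is guarded so the script degrades gracefully if run outside the conda env:

```python
# sanity_check.py — hypothetical helper, not part of the OSFormer repo
def meets_min_version(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically (e.g. '1.10.0' >= '1.9.0').

    Local suffixes such as '+cu111' are stripped before comparison.
    """
    to_tuple = lambda v: tuple(int(p) for p in v.split("+")[0].split(".")[:3])
    return to_tuple(installed) >= to_tuple(required)

if __name__ == "__main__":
    try:
        import torch  # assumes the osformer conda env above is active
        ok = meets_min_version(torch.__version__, "1.9.0")
        print(f"torch {torch.__version__}: {'OK' if ok else 'too old'}")
        print(f"CUDA available: {torch.cuda.is_available()}")
    except ImportError:
        print("torch is not installed; activate the osformer env first")
```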

Dataset preparation

Download the datasets

Register datasets

  1. Generate COCO-format annotation files; the mmdetection tutorial on customizing datasets may be helpful.
  2. Update the dataset and annotation paths in adet/data/datasets/cis.py; please refer to the detectron2 docs on custom datasets for more help.
# adet/data/datasets/cis.py
# change the paths 
DATASET_ROOT = 'COD10K-v3'
ANN_ROOT = os.path.join(DATASET_ROOT, 'annotations')
TRAIN_PATH = os.path.join(DATASET_ROOT, 'Train/Image')
TEST_PATH = os.path.join(DATASET_ROOT, 'Test/Image')
TRAIN_JSON = os.path.join(ANN_ROOT, 'train_instance.json')
TEST_JSON = os.path.join(ANN_ROOT, 'test2026.json')

NC4K_ROOT = 'NC4K'
NC4K_PATH = os.path.join(NC4K_ROOT, 'Imgs')
NC4K_JSON = os.path.join(NC4K_ROOT, 'nc4k_test.json')
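Step 1 above asks for COCO-format annotation files. As a reference for the expected layout, here is a minimal sketch (the field names follow the standard COCO instance format; the file names, category name, and coordinate values are made-up examples) that writes such a file with the standard library and re-reads it as a quick structural check:

```python
import json

# Minimal COCO-style instance annotation file (illustrative values only).
coco = {
    "images": [
        {"id": 1, "file_name": "COD10K-CAM-0001.jpg", "height": 480, "width": 640}
    ],
    "annotations": [{
        "id": 1, "image_id": 1, "category_id": 1,
        "bbox": [100, 120, 200, 150],          # [x, y, width, height]
        "area": 200 * 150,
        "segmentation": [[100, 120, 300, 120, 300, 270, 100, 270]],  # polygon
        "iscrowd": 0,
    }],
    "categories": [{"id": 1, "name": "camouflaged_object"}],
}

with open("train_instance.json", "w") as f:
    json.dump(coco, f)

# Re-read and verify the three top-level keys exist before registering the dataset.
with open("train_instance.json") as f:
    loaded = json.load(f)
assert {"images", "annotations", "categories"} <= loaded.keys()
print(f"{len(loaded['images'])} images, {len(loaded['annotations'])} annotations")
```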

Pre-trained models

Model weights: Baidu (password:l6vn) / Google / Quark

| Model | Config | COD10K-test AP | NC4K-test AP |
| --- | --- | --- | --- |
| R50-550 | configs/CIS_RT.yaml | 36.0 | 41.4 |
| R50 | configs/CIS_R50.yaml | 41.0 | 42.5 |
| R101 | configs/CIS_R101.yaml | 42.0 | 44.4 |
| PVTv2-B2-Li | configs/CIS_PVTv2B2Li.yaml | 47.2 | 50.5 |
| SWIN-T | configs/CIS_SWINT.yaml | 47.7 | 50.2 |

Visualization results

The visualization results are produced by OSFormer with a ResNet-50 backbone trained on the COD10K training set.

Frequently asked questions

FAQ

Usage

Train

python tools/train_net.py --config-file configs/CIS_R50.yaml --num-gpus 1 \
  OUTPUT_DIR {PATH_TO_OUTPUT_DIR}

Please replace {PATH_TO_OUTPUT_DIR} with your own output directory.

Inference

python tools/train_net.py --config-file configs/CIS_R50.yaml --eval-only \
  MODEL.WEIGHTS {PATH_TO_PRE_TRAINED_WEIGHTS}

Please replace {PATH_TO_PRE_TRAINED_WEIGHTS} with the path to the pre-trained weights.

Eval

python demo/demo.py --config-file configs/CIS_R50.yaml \
  --input {PATH_TO_THE_IMG_DIR_OR_FILE} \
  --output {PATH_TO_SAVE_DIR_OR_IMAGE_FILE} \
  --opts MODEL.WEIGHTS {PATH_TO_PRE_TRAINED_WEIGHTS}
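To run the demo over many images with per-image outputs, a small driver script can assemble the command for each file. The sketch below only builds the argument list, mirroring the command above; the dataset path, config, weights file, and output directory are hypothetical placeholders to replace before executing:

```python
from pathlib import Path

def build_demo_cmd(config: str, image: Path, out_dir: Path, weights: str) -> list:
    """Assemble a demo.py invocation for one image (mirrors the command above)."""
    return [
        "python", "demo/demo.py",
        "--config-file", config,
        "--input", str(image),
        "--output", str(out_dir / image.name),
        "--opts", "MODEL.WEIGHTS", weights,
    ]

if __name__ == "__main__":
    # Hypothetical paths; replace with your own before running.
    for img in sorted(Path("COD10K-v3/Test/Image").glob("*.jpg")):
        cmd = build_demo_cmd("configs/CIS_R50.yaml", img, Path("vis_out"), "osformer_r50.pth")
        print(" ".join(cmd))  # or: subprocess.run(cmd, check=True)
```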

Acknowledgement

This work is based on:

We also received help from mmdetection. Thanks to all of them for their great work!

Citation

If this work helps your research, please cite it:

@inproceedings{pei2022osformer,
  title={OSFormer: One-Stage Camouflaged Instance Segmentation with Transformers},
  author={Pei, Jialun and Cheng, Tianyang and Fan, Deng-Ping and Tang, He and Chen, Chuanbo and Van Gool, Luc},
  booktitle={European conference on computer vision},
  year={2022},
  organization={Springer}
}