Segment Every Out-of-Distribution Object

Score To Mask (S2M) is a simple and efficient way to utilize the anomaly scores produced by current mainstream methods and improve their performance. Experiments demonstrate that S2M outperforms the state of the art by approximately 20% in IoU and 40% in mean F1 score, on average.

Segment Every Out-of-Distribution Object
Wenjie Zhao, Jia Li, Xin Dong, Yu Xiang, Yunhui Guo
UT Dallas, Harvard
CVPR 2024

[arxiv] [bibtex]

Features

Preparation

Download the S2M checkpoint file and put it in ./tools.

Download the checkpoint file of SAM-B and put it in ./tools.
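
If convenient, the SAM-B weights can be fetched directly. The snippet below is a small convenience sketch: the URL is the official Segment Anything release for the ViT-B checkpoint, while the target filename and location are our assumption.

# Convenience sketch: download the SAM ViT-B checkpoint into ./tools.
# The URL is the official Segment Anything release; the local filename is assumed.
import urllib.request

url = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth"
urllib.request.urlretrieve(url, "./tools/sam_vit_b_01ec64.pth")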

Download the validation dataset. It should look like this:

${PROJECT_ROOT}
 -- val
     -- fishyscapes
         ...
     -- road_anomaly
         ...
     -- segment_me
         ...

Download the training set. It should look like this:

${PROJECT_ROOT}
 -- train_dataset
     -- offline_dataset
         ...
     -- offline_dataset_score
         ...
     -- offline_dataset_score_view
         ...
     -- ood.json
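
To catch path mistakes early, a quick sanity check can verify the layout described above. This is a minimal sketch; the directory names are taken from the two trees shown, so adjust the lists if yours differ.

# Minimal sanity check for the dataset layout described above.
# Paths are relative to ${PROJECT_ROOT}; names follow the trees shown.
import os

EXPECTED = [
    "val/fishyscapes",
    "val/road_anomaly",
    "val/segment_me",
    "train_dataset/offline_dataset",
    "train_dataset/offline_dataset_score",
    "train_dataset/offline_dataset_score_view",
    "train_dataset/ood.json",
]

for path in EXPECTED:
    status = "ok" if os.path.exists(path) else "MISSING"
    print(f"{status:>7}  {path}")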

Install the environment

Please create an environment with PyTorch == 2.0.1 and install the packages in requirements.txt. Then install detectron2 together with our S2M as follows:

  1. Move out of the S2M folder.
  2. Install S2M in editable mode:
python -m pip install -e S2M
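
After installation, a quick import check can confirm the environment is usable. This is only a sketch; the exact torch version string may carry a CUDA suffix depending on your build.

# Quick environment check: PyTorch 2.0.1 and an importable detectron2.
import torch
import detectron2

print("torch:", torch.__version__)            # expect 2.0.1 (possibly with a +cu suffix)
print("detectron2:", detectron2.__version__)
print("CUDA available:", torch.cuda.is_available())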

Training

Set the training details in configs/OE/OE.yaml.

cd ./tools
python3 plain_train_net.py   --config-file ../configs/OE/OE.yaml   --num-gpus 1 SOLVER.IMS_PER_BATCH 4 SOLVER.BASE_LR 0.0025
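
The trailing SOLVER.* pairs are standard detectron2 key-value overrides applied on top of OE.yaml. Roughly, they are merged as in the sketch below, which uses detectron2's stock yacs-based config API and is not the exact code in plain_train_net.py.

# Sketch of how the command-line overrides above are applied
# (standard detectron2/yacs pattern; plain_train_net.py may register
# custom config keys before merging OE.yaml).
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("../configs/OE/OE.yaml")
cfg.merge_from_list(["SOLVER.IMS_PER_BATCH", "4", "SOLVER.BASE_LR", "0.0025"])
print(cfg.SOLVER.IMS_PER_BATCH, cfg.SOLVER.BASE_LR)  # 4 0.0025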

Evaluation

Set the dataset path in ./tools/inference.py (line 256).

cd ./tools
python3 inference.py   --config-file ../configs/OE/OE.yaml   --eval-only MODEL.WEIGHTS /path_to/model.pth
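
For reference, the IoU and F1 numbers quoted above are computed on binary OOD masks. The snippet below is an unofficial sketch of these two metrics, not the evaluation code in inference.py; the official protocol (e.g., how F1 is averaged) may differ.

# Unofficial sketch of binary-mask IoU and F1 for OOD segmentation.
# pred and gt are boolean arrays of the same shape (True = OOD pixel).
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def f1(pred: np.ndarray, gt: np.ndarray) -> float:
    tp = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2 * tp / denom if denom else 1.0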

Acknowledgement

Our project is implemented based on the following projects. We really appreciate their excellent open-source work!

Citation

If our work has been helpful to you, we would greatly appreciate a citation.

@inproceedings{zhao2024segment,
  title={Segment Every Out-of-Distribution Object},
  author={Zhao, Wenjie and Li, Jia and Dong, Xin and Xiang, Yu and Guo, Yunhui},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3910--3920},
  year={2024}
}