<p align="center">Concealed Object Detection (IEEE TPAMI)</p>

PyTorch implementation of our extended model, termed the Search and Identification Network (SINet-V2), for the concealed object detection (COD) task.

Authors: Deng-Ping Fan, Ge-Peng Ji, Ming-Ming Cheng* & Ling Shao.

1. Features

<p align="center"> <img src="./imgs/SINet-V2-Award.png"/> <br /> </p>

If you have any questions about our paper, feel free to contact me via e-mail (gepengai.ji@gmail.com). If you use our code or evaluation toolbox in your research, please cite this paper (BibTeX).

2. :fire: NEWS :fire:

3. Overview

<p align="center"> <img src="./imgs/TaskRelationship.png"/> <br /> <em> Figure 1: Task relationship. One of the most popular directions in computer vision is generic object detection. Note that generic objects can be either salient or camouflaged; camouflaged objects can be seen as difficult cases of generic objects. Typical generic object detection tasks include semantic segmentation and panoptic segmentation (see Fig. 2 b). </em> </p> <p align="center"> <img src="./imgs/CamouflagedTask.png"/> <br /> <em> Figure 2: Given an input image (a), we present the ground-truth for (b) panoptic segmentation (which detects generic objects including stuff and things), (c) salient instance/object detection (which detects objects that grasp human attention), and (d) the proposed camouflaged object detection task, where the goal is to detect objects that have a similar pattern (e.g., edge, texture, or color) to the natural habitat. In this case, the boundaries of the two butterflies are blended with the bananas, making them difficult to identify. This task is far more challenging than the traditional salient object detection or generic object detection. </em> </p>

References for Salient Object Detection (SOD) benchmark works:<br> [1] Video SOD: Shifting More Attention to Video Salient Object Detection. CVPR, 2019. (Project Page)<br> [2] RGB SOD: Salient Objects in Clutter: Bringing Salient Object Detection to the Foreground. ECCV, 2018. (Project Page)<br> [3] RGB-D SOD: Rethinking RGB-D Salient Object Detection: Models, Datasets, and Large-Scale Benchmarks. TNNLS, 2020. (Project Page)<br> [4] Co-SOD: Taking a Deeper Look at the Co-salient Object Detection. CVPR, 2020. (Project Page)

4. Proposed Framework

4.1. Training/Testing

The training and testing experiments were conducted in PyTorch on a single NVIDIA TITAN RTX GPU with 24 GB of memory.

Note that our model also runs on GPUs with less memory; simply lower the batch size.

  1. Prerequisites:

    Note that SINet-V2 has only been tested on Ubuntu with the following environment. It may work on other operating systems (e.g., Windows) as well, but we do not guarantee it.

    • Creating a virtual environment in terminal: `conda create -n SINet python=3.6`.

    • Installing necessary packages: PyTorch > 1.1 and opencv-python (a quick environment check is sketched below).
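
    For a quick sanity check that the environment is set up as described, a minimal (unofficial) snippet:

```python
# A minimal, unofficial environment check (this helper is not part of
# the repository). Verifies the package versions listed above.
import torch
import cv2

print("PyTorch version:", torch.__version__)        # expected > 1.1
print("OpenCV version:", cv2.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```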

  2. Prepare the data:

    • Download the testing dataset and move it into ./Dataset/TestDataset/; it can be found on Google Drive.

    • Download the training/validation dataset and move it into ./Dataset/TrainValDataset/; it can be found on Google Drive.

    • Download the pre-trained weights and move them to ./snapshot/SINet_V2/Net_epoch_best.pth; they can be found on Google Drive.

    • Download the Res2Net weights pre-trained on the ImageNet dataset (Google Drive). A quick check of the resulting layout is sketched below.
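
    A minimal, unofficial snippet to verify that the downloads landed where the scripts expect them (paths taken from the list above):

```python
# Unofficial sanity check that the downloads are in the expected
# locations (paths taken from the list above).
import os

expected = [
    "./Dataset/TestDataset/",
    "./Dataset/TrainValDataset/",
    "./snapshot/SINet_V2/Net_epoch_best.pth",
]
for path in expected:
    print(("[ok]     " if os.path.exists(path) else "[MISSING]"), path)
```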

  3. Training Configuration:

    • Assign your customized paths, e.g., --train_save and --train_path in MyTrain_Val.py (a sketch of these flags follows below).

    • Just enjoy it by running `python MyTrain_Val.py` in your terminal.
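
    For reference, a sketch of how these flags might be declared. Only --train_save and --train_path are named in this README, so treat the defaults and the extra --batchsize flag as assumptions to be checked against MyTrain_Val.py:

```python
# Sketch of how the flags above might be declared in MyTrain_Val.py.
# Only --train_save and --train_path are named in this README; the
# defaults and the --batchsize flag are assumptions -- check the script.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--train_path', type=str,
                    default='./Dataset/TrainValDataset/',
                    help='root of the training/validation data (assumed default)')
parser.add_argument('--train_save', type=str, default='SINet_V2',
                    help='checkpoint folder name under ./snapshot/ (assumed)')
parser.add_argument('--batchsize', type=int, default=12,
                    help='hypothetical flag; lower it on low-memory GPUs')
opt = parser.parse_args()
print(opt)
```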

  4. Testing Configuration:

    • After downloading all the pre-trained models and testing datasets, just run `python MyTesting.py` to generate the final prediction maps, replacing --pth_path with the path to your trained model (a rough inference sketch follows below).

    • Just enjoy it!
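
    For reference, a rough inference sketch; the lib.Network_Res2Net_GRA_NCD import path, the Network() constructor, the 352x352 input size, and the multi-output handling are assumptions that should be verified against MyTesting.py:

```python
# Rough, unofficial inference sketch. The import path, the Network()
# constructor, the 352x352 input size, and the multi-output handling
# are assumptions -- verify them against MyTesting.py.
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
from lib.Network_Res2Net_GRA_NCD import Network  # assumed module path

model = Network()  # constructor arguments may differ in the repo
model.load_state_dict(torch.load('./snapshot/SINet_V2/Net_epoch_best.pth',
                                 map_location='cpu'))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((352, 352)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
image = preprocess(Image.open('demo.jpg').convert('RGB')).unsqueeze(0)

with torch.no_grad():
    out = model(image)
    # the model may return several side-output maps; take the last one
    out = out[-1] if isinstance(out, (list, tuple)) else out
    pred = torch.sigmoid(out).squeeze().cpu().numpy()

Image.fromarray((pred * 255).astype(np.uint8)).save('demo_pred.png')
```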

4.2. Evaluating your trained model:

One-click evaluation is implemented in MATLAB (link); please follow the instructions in ./eval/main.m and just run it to generate the evaluation results in ./res/. The complete evaluation toolbox (including data, maps, eval code, and results) is available at: link.
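
The official metrics come from the MATLAB toolbox above; as an unofficial quick check, here is a minimal sketch of one standard COD metric (mean absolute error, MAE) for a single prediction/ground-truth pair, with placeholder file names:

```python
# The official evaluation is the MATLAB toolbox above. This is only a
# hypothetical quick check of one standard metric (MAE) for a single
# prediction/ground-truth pair; the file names are placeholders.
import numpy as np
from PIL import Image

def mae(pred_path, gt_path):
    gt_img = Image.open(gt_path).convert('L')
    # align sizes before comparing (predictions may be 352x352)
    pred_img = Image.open(pred_path).convert('L').resize(gt_img.size)
    pred = np.asarray(pred_img, dtype=np.float64) / 255.0
    gt = np.asarray(gt_img, dtype=np.float64) / 255.0
    return np.abs(pred - gt).mean()

print('MAE:', mae('demo_pred.png', 'demo_gt.png'))
```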

4.3. Pre-computed maps:

They can be found at the download link (PyTorch results / Jittor results) for the four testing datasets: CHAMELEON, CAMO, COD10K, and NC4K.

5. SOTA models

Link: https://github.com/GewelsJI/SINet-V2/blob/main/AWESOME_COD_LIST.md

6. Citation

If you find this project useful, please consider citing:

@article{fan2021concealed,
  author={Fan, Deng-Ping and Ji, Ge-Peng and Cheng, Ming-Ming and Shao, Ling},
  title={Concealed Object Detection},
  journal={IEEE TPAMI},
  year={2022},
  volume={44},
  number={10},
  pages={6024--6042},
  doi={10.1109/TPAMI.2021.3085766}
}

@inproceedings{fan2020camouflaged,
  title={Camouflaged object detection},
  author={Fan, Deng-Ping and Ji, Ge-Peng and Sun, Guolei and Cheng, Ming-Ming and Shen, Jianbing and Shao, Ling},
  booktitle={IEEE CVPR},
  pages={2777--2787},
  year={2020}
}

7. FAQ

  1. If the images cannot be loaded on the page (mostly due to domestic network restrictions):

    Solution Link

  2. Erratum: Sub-figure (b) in Figure 17 of our paper is revised as follows. It shows that the decoder in 2019-CVPR-CPD builds the connection flow between the $f'_5$ branch and the $f'_4$ branch, rather than between the $f'_4$ branch and the $f'_3$ branch.

    <p align="center"> <img src="./imgs/figure17_revision.png" width="200" /> <br /> </p>

8. License

The source code is free for research and education use only. Any commercial use requires formal permission first.


⬆ back to top