Camouflaged Object Detection via Context-aware Cross-level Fusion
Authors: Geng Chen, Si-Jie Liu, Yu-Jia Sun, Ge-Peng Ji, Ya-Feng Wu, and Tao Zhou.
1. Preface
- This repository provides code for "Camouflaged Object Detection via Context-aware Cross-level Fusion" (paper).
- If you have any questions about our paper, feel free to contact us. If you use C2F-Net or the evaluation toolbox in your research, please cite this paper (BibTeX).
2. Overview
2.1. Introduction
Camouflaged object detection (COD) is a challenging task due to the low boundary contrast between the object and its surroundings. In addition, the appearance of camouflaged objects varies significantly, e.g., in object size and shape, aggravating the difficulty of accurate COD. In this paper, we propose a novel Context-aware Cross-level Fusion Network (C2FNet) to address the challenging COD task. Specifically, an attention-induced cross-level fusion module (ACFM) is proposed to fuse high-level features, and a dual-branch global context module (DGCM) is proposed to fully exploit multi-scale context information from the fused features. The two modules are organized in a cascaded manner, and the last DGCM provides an initial prediction. We then refine the low-level features with the initial prediction and predict the final COD result with our camouflage inference module (CIM).
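To make the cross-level fusion idea above more concrete, below is a schematic PyTorch sketch of attention-induced fusion of two adjacent feature levels. This is an illustration only, not the authors' ACFM/DGCM implementation; the channel-attention design, channel widths, and layer choices are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative choice)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x.mean(dim=(2, 3)))            # global average pool -> channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)   # reweight feature channels

class CrossLevelFusion(nn.Module):
    """Fuse a coarser high-level feature map into a finer low-level one."""
    def __init__(self, channels: int):
        super().__init__()
        self.att_low = ChannelAttention(channels)
        self.att_high = ChannelAttention(channels)
        self.conv = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Upsample the coarser features to the finer resolution before fusing.
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear", align_corners=False)
        return self.conv(torch.cat([self.att_low(low), self.att_high(high)], dim=1))

if __name__ == "__main__":
    low = torch.randn(1, 64, 44, 44)    # finer, lower-level features
    high = torch.randn(1, 64, 22, 22)   # coarser, higher-level features
    print(CrossLevelFusion(64)(low, high).shape)  # torch.Size([1, 64, 44, 44])
```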
2.2. Framework Overview
<p align="center"> <img src="Images/net.png"/> <br /> <em> Figure 1: The overall architecture of the proposed model, which consists of two key components, i.e., attention-induced cross-level fusion module, dual-branch global context module and camouflage inference module in. See § 3 in the paper for details. </em> </p>2.3. Qualitative Results
<p align="center"> <img src="Images/results.png"/> <br /> <em> Figure 2: Qualitative Results. </em> </p>3. Proposed Method
3.1. Training/Testing
The training and testing experiments were conducted in PyTorch on a single NVIDIA Tesla V100 GPU with 32 GB of memory.
Note that our model also runs on GPUs with less memory: simply lower the batch size.
- Configuring your environment (Prerequisites):
  Note that C2FNet has only been tested on Ubuntu with the environments below. It may work on other operating systems as well, but we do not guarantee that it will.
  - Creating a virtual environment in terminal: `conda create -n C2FNet python=3.6`.
  - Installing necessary packages: `pip install -r requirements.txt`.
- Downloading necessary data (an assumed layout sketch follows this list):
  - Download the testing dataset and move it into `./data/TestDataset/`; it can be found in this download link (Google Drive).
  - Download the training dataset and move it into `./data/TrainDataset/`; it can be found in this download link (Google Drive).
  - Download the pretrained weights and move them to `./checkpoints/C2FNet/C2FNet-49.pth`; they can be found in this download link (BaiduNetdisk), key: c0cc.
  - Download the Res2Net weights from this download link (Google Drive).
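For orientation, the paths referenced above fit a directory layout roughly like the following. Only the quoted paths come from this repository; the overall tree is an assumption.

```
C2FNet/
├── checkpoints/
│   └── C2FNet/
│       └── C2FNet-49.pth
├── data/
│   ├── TrainDataset/
│   └── TestDataset/
├── eval.py
├── MyTrain.py
└── MyTest.py
```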
- Training Configuration:
  - Assign your customized paths, e.g., `--train_save` and `--train_path`, in `MyTrain.py`.
  - We have modified the total number of epochs and the learning-rate decay method (`lib/utils.py` has been updated), so the setup differs from the training setup reported in the paper. Under the new settings, training is more stable; a generic sketch of such a decay schedule follows this list.
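For intuition, here is a minimal Python sketch of a typical step-style learning-rate decay used in PyTorch training loops. It is not the exact schedule implemented in `lib/utils.py`; `init_lr`, `decay_rate`, and `decay_epoch` below are assumed values.

```python
import torch

def adjust_lr(optimizer: torch.optim.Optimizer, init_lr: float, epoch: int,
              decay_rate: float = 0.1, decay_epoch: int = 30) -> float:
    """Multiply the base learning rate by decay_rate every decay_epoch epochs."""
    lr = init_lr * decay_rate ** (epoch // decay_epoch)
    for param_group in optimizer.param_groups:
        param_group["lr"] = lr
    return lr
```

Calling `adjust_lr(optimizer, init_lr, epoch)` at the start of each epoch keeps the optimizer's learning rate in sync with the schedule.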
- Testing Configuration:
  - After you have downloaded the pre-trained model and the testing dataset, just run `MyTest.py` to generate the final prediction maps: set your trained model directory via `--pth_path`.
  - Just enjoy it!
3.2. Evaluating your trained model
One-key evaluation is written in MATLAB code (revised from link); please follow the instructions in `./eval/main.m` and just run it to generate the evaluation results.
If you want to speed up the evaluation on a GPU, you can use the efficient tool link, installed via `pip install pysodmetrics`. Assign your customized paths, e.g., `method`, `mask_root`, and `pred_root`, in `eval.py`, then just run `eval.py` to evaluate the trained model; a usage sketch of the metric toolkit is given below.
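For reference, here is a minimal evaluation sketch using the PySODMetrics toolkit installed above. The `step`/`get_results` usage follows that package's documented API at the time of writing; the two directory paths are assumptions chosen for illustration.

```python
import os
import cv2
from py_sod_metrics import Emeasure, Fmeasure, MAE, Smeasure, WeightedFmeasure

mask_root = "./data/TestDataset/CAMO/GT"   # ground-truth masks (assumed layout)
pred_root = "./results/C2FNet/CAMO"        # prediction maps from MyTest.py (assumed layout)

FM, WFM, SM, EM, M = Fmeasure(), WeightedFmeasure(), Smeasure(), Emeasure(), MAE()
for name in sorted(os.listdir(mask_root)):
    gt = cv2.imread(os.path.join(mask_root, name), cv2.IMREAD_GRAYSCALE)
    pred = cv2.imread(os.path.join(pred_root, name), cv2.IMREAD_GRAYSCALE)
    pred = cv2.resize(pred, gt.shape[::-1])  # metrics expect matching resolutions
    for metric in (FM, WFM, SM, EM, M):
        metric.step(pred=pred, gt=gt)

print({
    "Smeasure": SM.get_results()["sm"],
    "wFmeasure": WFM.get_results()["wfm"],
    "MAE": M.get_results()["mae"],
    "meanEm": EM.get_results()["em"]["curve"].mean(),
    "meanFm": FM.get_results()["fm"]["curve"].mean(),
})
```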
3.3. Pre-computed maps
The pre-computed prediction maps can be found in this download link (BaiduNetdisk), key: ihuu.
4. Citation
Please cite our paper if you find the work useful:
@article{chen2022camouflaged,
title={Camouflaged Object Detection via Context-aware Cross-level Fusion},
author={Chen, Geng and Liu, Si-Jie and Sun, Yu-Jia and Ji, Ge-Peng and Wu, Ya-Feng and Zhou, Tao},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
year={2022},
publisher={IEEE}
}
5. License
The source code is free for research and education use only. Any commercial use requires formal permission in advance.