# Feature Shrinkage Pyramid for Camouflaged Object Detection with Transformers (CVPR 2023)
<p align="left"> <a href="https://arxiv.org/abs/2303.14816"><img src="https://img.shields.io/badge/Paper-arXiv-green"></a> <a href="https://tzxiang.github.io/project/COD-FSPNet/index.html"><img src="https://img.shields.io/badge/Page-Project-blue"></a> </p>

## Usage
The training and testing experiments are conducted with PyTorch on 8 Tesla V100 GPUs with 36 GB of memory.
### 1. Prerequisites
Note that FSPNet has only been tested on Ubuntu with the following environment.
- Creating a virtual environment in the terminal:
  ```bash
  conda create -n FSPNet python=3.8
  ```
- Installing the necessary packages (a quick environment check is sketched after this list):
  ```bash
  pip install -r requirements.txt
  ```
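As a quick sanity check after installation (not part of the original repository), the following minimal Python snippet verifies that PyTorch is installed and can see the GPUs:

```python
import torch

# Report the installed PyTorch version and the CUDA devices PyTorch can see.
print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available:  {torch.cuda.is_available()}")
print(f"Visible GPUs:    {torch.cuda.device_count()}")
```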
### 2. Downloading Training and Testing Datasets
- Download the training set (COD10K-train) used for training.
- Download the testing sets (COD10K-test + CAMO-test + CHAMELEON + NC4K) used for testing. A minimal data-loading sketch follows this list.
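For orientation only, below is a minimal, hypothetical PyTorch `Dataset` sketch for paired camouflage images and ground-truth masks. The folder names `Imgs`/`GT`, the PNG mask extension, and the joint transform are assumptions and should be adapted to the actual layout of the downloaded data.

```python
import os
from PIL import Image
from torch.utils.data import Dataset

class CODPairDataset(Dataset):
    """Minimal image/ground-truth pair loader (folder names are assumptions)."""

    def __init__(self, root, joint_transform=None):
        self.img_dir = os.path.join(root, "Imgs")  # assumed RGB image folder
        self.gt_dir = os.path.join(root, "GT")     # assumed binary mask folder
        self.names = sorted(os.listdir(self.img_dir))
        self.joint_transform = joint_transform

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.img_dir, name)).convert("RGB")
        gt_name = os.path.splitext(name)[0] + ".png"   # masks assumed to be PNG
        mask = Image.open(os.path.join(self.gt_dir, gt_name)).convert("L")
        if self.joint_transform is not None:
            # Joint transform assumed to resize/augment image and mask together.
            image, mask = self.joint_transform(image, mask)
        return image, mask
```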
### 3. Training Configuration
- The pretrained model is stored in Google Drive and Baidu Drive (extraction code: xuwb). After downloading, please update the file path in the corresponding code (a sketch of loading such weights follows this list).
- Run `train.sh` or `slurm_train.sh` as needed to train.
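For reference, the usual PyTorch pattern for pointing the training code at the downloaded pretrained weights looks like the sketch below. This is an assumption about the checkpoint format, not the repository's exact code; `strict=False` simply tolerates decoder keys that are absent from a backbone-only checkpoint.

```python
import torch

def load_pretrained(model: torch.nn.Module, ckpt_path: str) -> torch.nn.Module:
    """Load pretrained weights from `ckpt_path` into `model`, ignoring mismatched keys."""
    state_dict = torch.load(ckpt_path, map_location="cpu")
    if isinstance(state_dict, dict) and "state_dict" in state_dict:
        # Some checkpoints wrap the weights under a "state_dict" key.
        state_dict = state_dict["state_dict"]
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
    return model
```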
### 4. Testing Configuration
- Our well-trained model is stored in Google Drive and Baidu Drive (extraction code: otz5). After downloading, please update the file path in the corresponding code.
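For orientation, a typical testing loop loads the trained checkpoint, runs each image through the network, and saves the predicted map; a minimal sketch follows. The 384x384 input size, ImageNet normalization, and single-map model output are assumptions; the repository's own test script is authoritative.

```python
import os
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

@torch.no_grad()
def predict_folder(model, img_dir, save_dir, size=384, device="cuda"):
    """Run `model` over every image in `img_dir` and save grayscale prediction maps."""
    os.makedirs(save_dir, exist_ok=True)
    preprocess = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # assumed ImageNet stats
    ])
    model.eval().to(device)
    for name in sorted(os.listdir(img_dir)):
        image = Image.open(os.path.join(img_dir, name)).convert("RGB")
        x = preprocess(image).unsqueeze(0).to(device)
        pred = model(x)                               # assumed to return one logit map (1, 1, H, W)
        pred = torch.sigmoid(pred)
        # Resize the prediction back to the original image resolution before saving.
        pred = F.interpolate(pred, size=image.size[::-1], mode="bilinear", align_corners=False)
        out = (pred.squeeze().cpu().numpy() * 255).astype(np.uint8)
        Image.fromarray(out).save(os.path.join(save_dir, os.path.splitext(name)[0] + ".png"))
```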
### 5. Evaluation
- MATLAB code: one-key evaluation is written in MATLAB; please follow the instructions in `main.m` and simply run it to generate the evaluation results.
- Python code: after configuring the test dataset path, run `slurm_eval.py` in the `run_slurm` folder for evaluation. (A quick MAE sanity check is sketched after this list.)
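As a lightweight cross-check before running the full evaluation above, mean absolute error (MAE), one of the standard COD metrics, can be computed directly from the saved prediction maps. The sketch below assumes prediction and ground-truth maps share file names; it is not the repository's evaluation code.

```python
import os
import numpy as np
from PIL import Image

def mean_absolute_error(pred_dir, gt_dir):
    """Average per-pixel |prediction - GT| over all maps, with both scaled to [0, 1]."""
    maes = []
    for name in sorted(os.listdir(gt_dir)):
        gt = np.asarray(Image.open(os.path.join(gt_dir, name)).convert("L"), dtype=np.float32) / 255.0
        # Resize the prediction to the ground-truth resolution if needed (PIL expects (W, H)).
        pred_img = Image.open(os.path.join(pred_dir, name)).convert("L").resize(gt.shape[::-1])
        pred = np.asarray(pred_img, dtype=np.float32) / 255.0
        maes.append(np.abs(pred - gt).mean())
    return float(np.mean(maes))

# Example (hypothetical paths):
# print(mean_absolute_error("./results/COD10K", "./data/COD10K-test/GT"))
```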
### 6. Results Download
The prediction results of our FSPNet are stored in Google Drive and Baidu Drive (extraction code: ryzg); please check them.
## Citation
```bibtex
@inproceedings{Huang2023Feature,
  title={Feature Shrinkage Pyramid for Camouflaged Object Detection with Transformers},
  author={Huang, Zhou and Dai, Hang and Xiang, Tian-Zhu and Wang, Shuo and Chen, Huai-Xin and Qin, Jie and Xiong, Huan},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}
```
Thanks to Deng-Ping Fan, Ge-Peng Ji, et al. for their series of efforts in the field of COD.