FSNet: Focus Scanning Network for Camouflaged Object Detection
Authors: Ze Song, Xudong Kang, Xiaohui Wei, Haibo Liu, Renwei Dian, and Shutao Li.
Code implementation of "FSNet: Focus Scanning Network for Camouflaged Object Detection", IEEE TIP 2023. [Paper]
Prerequisites
Install the prerequisites with the following commands:
conda create -n FSNet python=3.7
conda activate FSNet
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt
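A quick sanity check, outside the repo's scripts, to confirm the environment matches the pinned versions and that the GPU is visible (a minimal sketch, nothing FSNet-specific):

```python
import torch
import torchvision

# The conda command above pins torch 1.6.0 / torchvision 0.7.0 with CUDA 10.2.
print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```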
Usage
1. Download the pre-trained Swin Transformer model
Please download the model from the official website.
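Once downloaded, you can inspect the checkpoint before training; the filename below is a placeholder for whichever Swin variant you downloaded, and the "model" key is how the official Swin checkpoints wrap their weights (a sketch, not part of MyTrain_Val.py):

```python
import torch

# Placeholder path: substitute the Swin checkpoint you actually downloaded.
ckpt_path = "./pretrained/swin_base_patch4_window12_384.pth"

state = torch.load(ckpt_path, map_location="cpu")
state = state.get("model", state)  # official Swin checkpoints store weights under "model"
print(len(state), "tensors, e.g.", next(iter(state)))
# The training script is expected to load these into the backbone,
# typically via load_state_dict(state, strict=False).
```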
2. Prepare data
We use data from publicly available datasets:
- Download the testing dataset (available on Google Drive or Baidu Drive, extraction code: fapn) and move it into ./Dataset/TestDataset/.
- Download the training/validation dataset (available on Google Drive or Baidu Drive, extraction code: fapn) and move it into ./Dataset/TrainValDataset/.
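The exact folder names inside the archives are not spelled out here; a small script like the one below (assuming the common Imgs/GT layout used by COD datasets, adjust to whatever the repo's data loader expects) helps verify that images and masks were extracted in matching pairs:

```python
from pathlib import Path

# Assumed layout: ./Dataset/TestDataset/<dataset>/Imgs and .../GT
root = Path("./Dataset/TestDataset")
for ds in sorted(p for p in root.iterdir() if p.is_dir()):
    imgs = sorted(list((ds / "Imgs").glob("*.jpg")) + list((ds / "Imgs").glob("*.png")))
    masks = sorted((ds / "GT").glob("*.png"))   # binary ground-truth masks
    print(f"{ds.name}: {len(imgs)} images, {len(masks)} masks")
```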
3. Train
To train FSNet with a customized path:
python MyTrain_Val.py --save_path './snapshot/FSNet/'
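The only flag shown above is --save_path; a minimal sketch of how MyTrain_Val.py would typically parse it (the --epoch and --lr flags are hypothetical examples, check the script for its real options):

```python
import argparse
import os

parser = argparse.ArgumentParser(description="FSNet training")
parser.add_argument("--save_path", type=str, default="./snapshot/FSNet/",
                    help="directory where checkpoints such as Net_epoch_best.pth are written")
# Hypothetical extra flags, for illustration only.
parser.add_argument("--epoch", type=int, default=100)
parser.add_argument("--lr", type=float, default=1e-4)
opt = parser.parse_args()

os.makedirs(opt.save_path, exist_ok=True)
print("checkpoints will be saved to", opt.save_path)
```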
4. Test
To test with the trained model:
python MyTesting.py --pth_path './snapshot/FSNet/Net_epoch_best.pth'
You can download our trained weights from Google Drive and move them into ./snapshot/FSNet/.
You can also download prediction maps from Google Drive.
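Conceptually, the testing step loads the checkpoint, runs each test image through the network, and writes one prediction map per image. The sketch below illustrates that flow; the FSNet import path, the single-map output, and the 384 input size are assumptions, and MyTesting.py remains the authoritative script:

```python
import cv2
import numpy as np
import torch
from pathlib import Path
# from lib.FSNet import FSNet  # hypothetical import path; use the repo's actual module

@torch.no_grad()
def run_inference(model, img_dir, out_dir, size=384, device="cuda"):
    """Run the model over every image in img_dir and save sigmoid prediction maps."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    model.eval().to(device)
    for img_path in sorted(Path(img_dir).glob("*.jpg")):
        img = cv2.cvtColor(cv2.imread(str(img_path)), cv2.COLOR_BGR2RGB)
        h, w = img.shape[:2]
        x = cv2.resize(img, (size, size)).astype(np.float32) / 255.0  # add normalization if the repo uses it
        x = torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0).to(device)
        pred = torch.sigmoid(model(x))                                # assumes a single-map output
        pred = torch.nn.functional.interpolate(pred, (h, w), mode="bilinear", align_corners=False)
        out = (pred[0, 0].cpu().numpy() * 255).astype(np.uint8)
        cv2.imwrite(str(out_dir / f"{img_path.stem}.png"), out)
```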
5. Evaluation
We use the public one-key evaluation toolbox, which is written in MATLAB (link).
Please follow the instructions in ./eval/main.m and run it to generate the evaluation results in ./res/.
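If MATLAB is not at hand, a quick sanity check (not a replacement for the official toolbox, which also reports structure- and enhanced-alignment-style measures) is to compute the MAE between prediction maps and ground truth; the paths below are illustrative:

```python
import cv2
import numpy as np
from pathlib import Path

def mean_absolute_error(pred_dir, gt_dir):
    """Average per-pixel |prediction - ground truth| over one dataset, both maps in [0, 1]."""
    maes = []
    for gt_path in sorted(Path(gt_dir).glob("*.png")):
        pred_path = Path(pred_dir) / gt_path.name
        gt = cv2.imread(str(gt_path), cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
        pred = cv2.imread(str(pred_path), cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
        if pred.shape != gt.shape:                      # predictions may be saved at a different size
            pred = cv2.resize(pred, (gt.shape[1], gt.shape[0]))
        maes.append(float(np.abs(pred - gt).mean()))
    return float(np.mean(maes))

# Illustrative paths: one test dataset's predictions vs. its ground truth.
print(mean_absolute_error("./res/CAMO", "./Dataset/TestDataset/CAMO/GT"))
```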
Citation
Please cite our paper if you find the work useful, thanks!
@article{song2023fsnet,
title={FSNet: Focus Scanning Network for Camouflaged Object Detection},
author={Song, Ze and Kang, Xudong and Wei, Xiaohui and Liu, Haibo and Dian, Renwei and Li, Shutao},
journal={IEEE Transactions on Image Processing},
year={2023},
publisher={IEEE}
}