# Depth-Aware Concealed Crop Detection in Dense Agricultural Scenes (CVPR 2024)
## Overview
- Our ACOD-12K
- Our RISNet
## Usage
The training and testing experiments were conducted with PyTorch on a single NVIDIA RTX 3090 GPU with 24 GB of memory.
### Dependencies
- Create a virtual environment in the terminal:

  ```
  conda create -n RISNet python=3.8
  ```

- Install the necessary packages:

  ```
  pip install -r requirements.txt
  ```
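After installing the requirements, a quick sanity check (not part of the original scripts; the expected device is simply the RTX 3090 mentioned above) can confirm that PyTorch sees the GPU:

```python
# Sanity check that PyTorch and CUDA are set up (not an official repo script).
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # On the setup described above this should report an RTX 3090 with ~24 GB.
    print("Device:", torch.cuda.get_device_name(0))
    print("Memory (GB):", round(props.total_memory / 1024**3, 1))
```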
### Datasets
- Our ACOD-12K can be found on Hugging Face or Baidu Drive (extraction code: 0vy7).
### Training
- The pretrained model is stored in Google Drive and Baidu Drive (extraction code: 51sr). After downloading, please change the file path in the corresponding code.
- You can use our default configuration to train your own RISNet, like this:

  ```
  python Train.py --epoch 100 --lr 1e-4 --batchsize 4 --trainsize 704 --train_path Your_dataset_path --save_path Your_save_path
  ```
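How Train.py consumes the downloaded pretrained weights is defined in the repository code; the snippet below is only a minimal sketch of the usual PyTorch pattern, and the module path, class name, and file name are assumptions to adapt to the actual code.

```python
# Minimal sketch of loading downloaded pretrained weights (hypothetical names;
# mirror whatever Train.py actually does in this repository).
import torch
from lib.RISNet import RISNet  # assumed location of the model definition

model = RISNet()
state = torch.load("./pretrained/RISNet_pretrained.pth", map_location="cpu")  # your downloaded file
model.load_state_dict(state, strict=False)  # strict=False tolerates missing/extra keys
model = model.cuda()
```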
### Testing
- Our well-trained model is stored in Google Drive and Baidu Drive (extraction code: 4sgg). After downloading, please change the file path in the corresponding code.
- You can use our default configuration to generate the final prediction maps, like this:

  ```
  python Test.py --testsize 704 --pth_path Your_checkpoint_path --test_path Your_dataset_path
  ```
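Test.py writes grayscale prediction maps to the configured output path. For a quick binary visualization of a map, a simple threshold such as the sketch below works; the file names are placeholders, not paths from this repository.

```python
# Threshold a generated prediction map for quick visual inspection
# (file names are placeholders).
import cv2

pred = cv2.imread("results/sample_0001.png", cv2.IMREAD_GRAYSCALE)
binary = (pred >= 128) * 255  # 0.5 threshold on the 0-255 grayscale map
cv2.imwrite("results/sample_0001_bin.png", binary.astype("uint8"))
```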
### Evaluation
- MATLAB version: https://github.com/DengPingFan/CODToolbox
- Python version: https://github.com/lartpang/PySODMetrics (a usage sketch follows below)
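For the Python toolbox, an evaluation loop along the lines of the sketch below is typical. The directory names are placeholders, and the metric classes and result keys reflect the PySODMetrics package as commonly documented, so double-check its README against the version you install.

```python
# Minimal evaluation sketch with PySODMetrics (pip install pysodmetrics);
# directory names are placeholders.
import os
import cv2
from py_sod_metrics import MAE, Smeasure, Emeasure, WeightedFmeasure

pred_dir, gt_dir = "results/ACOD-12K", "ACOD-12K/Test/GT"  # placeholder paths
mae, sm, em, wfm = MAE(), Smeasure(), Emeasure(), WeightedFmeasure()

for name in sorted(os.listdir(gt_dir)):
    gt = cv2.imread(os.path.join(gt_dir, name), cv2.IMREAD_GRAYSCALE)
    pred = cv2.imread(os.path.join(pred_dir, name), cv2.IMREAD_GRAYSCALE)
    pred = cv2.resize(pred, gt.shape[::-1])  # align prediction to GT size
    for metric in (mae, sm, em, wfm):
        metric.step(pred=pred, gt=gt)

print("MAE:", mae.get_results()["mae"])
print("S-measure:", sm.get_results()["sm"])
print("E-measure (adaptive):", em.get_results()["em"]["adp"])
print("Weighted F-measure:", wfm.get_results()["wfm"])
```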
## Results
### Concealed Crop Detection (CCD)
- Results of our RISNet can be found in Google Drive and Baidu Drive (extraction code: te1v).
### Concealed Object Detection (COD)
- Results of our RISNet can be found in Google Drive and Baidu Drive (extraction code: 85oa).
- Our well-trained model can be found in Google Drive and Baidu Drive (extraction code: 9jw8).
## Citation
If you find RISNet useful for your research and applications, please cite it using this BibTeX entry:
```bibtex
@inproceedings{wang2024depth,
  title={Depth-Aware Concealed Crop Detection in Dense Agricultural Scenes},
  author={Wang, Liqiong and Yang, Jinyu and Zhang, Yanfu and Wang, Fangyi and Zheng, Feng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={17201--17211},
  year={2024}
}
```