CoNet
Code repository for our paper entitled "Accurate RGB-D Salient Object Detection via Collaborative Learning", accepted at ECCV 2020 (poster).
Overall framework (figure)
CoNet Code
> Requirement
- PyTorch 1.0.0+
- torchvision
- PIL
- numpy
> Usage
1. Clone the repo
    git clone https://github.com/jiwei0921/CoNet.git
    cd CoNet/
2. Train/Test
- test
Download our test datasets (link) and checkpoint (link; extraction code: 12yn), then set the dataset path and checkpoint name correctly.
Set '--phase' to test in demo.py.
Set '--param' to True in demo.py.
Run: python demo.py
- train
Download our training dataset (link; extraction code: 203g), then set the dataset path and checkpoint name correctly.
Set '--phase' to train in demo.py.
Set '--param' to True or False in demo.py.
Note: True loads a checkpoint before training; False trains from scratch without one.
Run: python demo.py
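The '--phase' and '--param' options above could be wired up with Python's argparse; the sketch below is hypothetical (the actual argument handling in demo.py may differ) and only illustrates the two options as described:

```python
import argparse

def build_parser():
    # Hypothetical sketch of the options described above;
    # demo.py's real interface may differ.
    parser = argparse.ArgumentParser(description='CoNet demo')
    parser.add_argument('--phase', choices=['train', 'test'], default='test',
                        help='run training or inference')
    parser.add_argument('--param', type=lambda s: s.lower() == 'true',
                        default=True,
                        help='True loads a checkpoint; False starts from scratch')
    return parser

# Example: inference with a loaded checkpoint.
args = build_parser().parse_args(['--phase', 'test', '--param', 'True'])
print(args.phase, args.param)  # test True
```

With such an interface, `python demo.py --phase test --param True` would run inference from the downloaded checkpoint.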
> Results
We provide the saliency maps (extraction code: qrs2) of our CoNet on 8 datasets (DUT-RGBD, STEREO, NJUD, LFSD, RGBD135, NLPR, SSD, SIP), as well as on 2 extended datasets (NJU2K and STERE1000) following CPFP_CVPR19.
Note: For evaluation, all results are computed with this ready-to-use toolbox.
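Among the metrics typically reported for salient object detection, mean absolute error (MAE) is the simplest: the per-pixel average of |prediction − ground truth|. A minimal NumPy sketch follows; the [0, 1] normalization and 2-D array shapes are assumptions, and the toolbox above may handle them differently:

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a saliency map and its ground truth.

    Both inputs are assumed to be 2-D arrays scaled to [0, 1];
    the evaluation toolbox may normalize differently.
    """
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    return np.mean(np.abs(pred - gt))

# Toy example: a 2x2 prediction against a binary mask.
pred = np.array([[0.9, 0.1], [0.2, 0.8]])
gt = np.array([[1.0, 0.0], [0.0, 1.0]])
print(mae(pred, gt))  # ~0.15
```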
> Related RGB-D Saliency Datasets
All the common RGB-D saliency datasets we collected are shared in a ready-to-use manner.
- The web link is here.
If you find this work helpful, please cite:
@InProceedings{Wei_2020_ECCV,
author={Ji, Wei and Li, Jingjing and Zhang, Miao and Piao, Yongri and Lu, Huchuan},
title = {Accurate {RGB-D} Salient Object Detection via Collaborative Learning},
booktitle = {European Conference on Computer Vision},
year = {2020}
}
- For more information about CoNet, please read the Manuscript.
- Thanks to the related authors for providing their code and results, in particular Deng-ping Fan, Hao Chen, and Chun-biao Zhu.
> Contact Us
More details can be found on Wei Ji's GitHub.
If you have any questions, please contact us ( weiji.dlut@gmail.com ).