UMNet
The PyTorch implementation of the CVPR 2022 paper "Multi-Source Uncertainty Mining for Deep Unsupervised Saliency Detection".
Trained Model, Test Data and Results
Please download the trained model, test data and SOD results from Baidu Cloud (password: tmzw).
Requirement
• Python 3.7
• PyTorch 1.6.1
• torchvision
• numpy
• Pillow
• Cython
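For convenience, the dependencies above can be collected into a requirements.txt. This is a hypothetical file assembled from the list; the authors only pin the Python and PyTorch versions, so the remaining packages are left unpinned:

```
torch==1.6.1
torchvision
numpy
Pillow
Cython
```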
Run
- Please download the trained model and the test datasets (including DUTS-TE, DUT-OMRON, ECSSD, and HKU-IS). Uncompress them and put them in the current directory.
- Set the paths of the test sets and the trained model in config.py. Default settings are provided in config.py.
- Run main.py to obtain the predicted saliency maps. The results are saved in the save_path (see config.py). You can also download our saliency results from Baidu Cloud.
- Run compute_score.py to obtain the evaluation scores of the predictions in terms of MAE, Fmax, Sm, and Em. The evaluation code is adapted from https://github.com/Xiaoqi-Zhao-DLUT/GateNet-RGB-Saliency.
- Please make sure the paths to the ground truth and the predictions in compute_score.py are valid.
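The steps above end with metric computation. As a rough illustration (not the actual compute_score.py), MAE and the F-measure can be sketched as follows; the function names, the adaptive threshold, and the conventional beta^2 = 0.3 weighting are assumptions based on common SOD evaluation practice:

```python
# Minimal sketch of two SOD metrics; pred and gt are HxW maps in [0, 1].
import numpy as np

def mae(pred, gt):
    """Mean absolute error between prediction and ground truth."""
    return np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean()

def f_measure(pred, gt, beta2=0.3):
    """F-measure at an adaptive threshold (twice the mean saliency),
    with the conventional beta^2 = 0.3 that emphasizes precision."""
    thresh = min(2.0 * pred.mean(), 1.0)
    binary = pred >= thresh
    tp = np.logical_and(binary, gt > 0.5).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / ((gt > 0.5).sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
```

Fmax in the paper is the maximum F-measure over a sweep of thresholds; the adaptive-threshold version here is just the simplest variant to show the shape of the computation.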
Train
Note: Our method is trained mainly following the same settings as DeepUSPS. We use the 2,500 MSRA-B training images for network training.
- Four traditional SOD methods, MC, HS, DSR, and RBD, are adopted to generate pseudo labels for the training data; these labels are refined using the first stage of DeepUSPS.
- The four kinds of refined pseudo labels are then used for multi-source network learning with our training code (extract code: a4hh).
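To make the multi-source idea concrete, here is a toy sketch of combining the four refined pseudo labels (MC, HS, DSR, RBD). Using the per-pixel variance across sources as an uncertainty proxy is an illustrative simplification of ours, not the uncertainty mining module from the paper:

```python
# Toy fusion of multi-source pseudo labels; each label is an HxW
# saliency map in [0, 1] produced by one traditional SOD method.
import numpy as np

def fuse_pseudo_labels(labels):
    """Returns a consensus pseudo label (per-pixel mean) and a simple
    uncertainty map (per-pixel variance, high where sources disagree)."""
    stack = np.stack([l.astype(np.float64) for l in labels], axis=0)
    mean_label = stack.mean(axis=0)
    uncertainty = stack.var(axis=0)
    return mean_label, uncertainty
```

In the paper the network itself learns to model the inter-source uncertainty; this snippet only shows why pixels where the four methods disagree carry less reliable supervision.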