# MEANet
PyTorch implementation for MEANet: Multi-Modal Edge-Aware Network for Light Field Salient Object Detection.
## Requirements
- Python 3.6
- Torch 1.10.2
- Torchvision 0.4.0
- Cuda 10.0
- Tensorboard 2.7.0
## Usage
### To Train
- Download the training dataset and modify `train_data_path`.
- Start training with:

```bash
python -m torch.distributed.launch --nproc_per_node=4 train.py
```
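The command above uses `torch.distributed.launch` to spawn one training process per GPU (four here). As a point of reference only, the sketch below shows the general distributed-data-parallel pattern such a launched script typically follows; the dataset, model, and layer sizes are placeholders, not the actual `train.py` of this repository.

```python
# Minimal DDP training skeleton (illustration only, not MEANet's train.py).
import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    parser = argparse.ArgumentParser()
    # torch.distributed.launch passes --local_rank to each spawned process
    parser.add_argument('--local_rank', type=int, default=0)
    args = parser.parse_args()

    dist.init_process_group(backend='nccl')       # reads rank/world size from launcher env vars
    torch.cuda.set_device(args.local_rank)

    # Placeholder data and model standing in for the light field dataset and MEANet
    dataset = TensorDataset(torch.randn(64, 16), torch.randn(64, 1))
    sampler = DistributedSampler(dataset)         # shards the data across processes
    loader = DataLoader(dataset, batch_size=8, sampler=sampler)

    model = torch.nn.Linear(16, 1).cuda(args.local_rank)
    model = DDP(model, device_ids=[args.local_rank])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    for epoch in range(2):
        sampler.set_epoch(epoch)                  # reshuffle across ranks each epoch
        for x, y in loader:
            x, y = x.cuda(args.local_rank), y.cuda(args.local_rank)
            loss = torch.nn.functional.mse_loss(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

if __name__ == '__main__':
    main()
```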
### To Test
- Download the testing dataset and put it in the `dataset/test/` folder.
- Download the trained MEANet model and put it in the `trained_weight/` folder.
- Change `weight_name` in `test.py` to the model to be evaluated.
- Start testing with:

```bash
python test.py
```
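For orientation, the snippet below sketches what the testing step amounts to: load the checkpoint named by `weight_name` from `trained_weight/` and write predicted saliency maps to disk. Only `weight_name`, `trained_weight/`, and `dataset/test/` come from the instructions above; the `DummySaliencyNet` class, the input tensor, and the `saliency_maps/` output folder are assumptions for illustration, not the repository's actual `test.py`.

```python
# Illustrative testing sketch (placeholders, not the repository's test.py).
import os
import torch
from torch import nn
from torchvision.utils import save_image

# Placeholder model standing in for MEANet (the real network takes light field inputs).
class DummySaliencyNet(nn.Module):
    def forward(self, rgb):
        # Fake per-pixel saliency prediction in [0, 1]
        return torch.sigmoid(rgb.mean(dim=1, keepdim=True))

weight_name = 'MEANet.pth'                        # change to the checkpoint you want to evaluate
weight_path = os.path.join('trained_weight', weight_name)

model = DummySaliencyNet()
# With the real model you would load the released weights here, e.g.:
# model.load_state_dict(torch.load(weight_path, map_location='cpu'))
model.eval()

os.makedirs('saliency_maps', exist_ok=True)       # hypothetical output folder
with torch.no_grad():
    rgb = torch.rand(1, 3, 256, 256)              # stand-in for one image from dataset/test/
    pred = model(rgb)                             # 1 x 1 x H x W saliency map
    save_image(pred, os.path.join('saliency_maps', 'example.png'))
```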
## Download
### Trained model for testing
We released two versions of the trained model:

- Trained with 100 additional samples from HFUT-Lytro: on Baidu Pan (fetch code: 0o0r) or on Google Drive
- Trained only with DUTLF-FS: on Baidu Pan (fetch code: 75bn) or on Google Drive
### Saliency map
We released two versions of the saliency maps:

- From the model trained with 100 additional samples from HFUT-Lytro: on Baidu Pan (fetch code: x7xa) or on Google Drive
- From the model trained only with DUTLF-FS: on Baidu Pan (fetch code: s7vn) or on Google Drive
## Citation
Please cite our paper if you find the work useful:
```bibtex
@article{JIANG202278,
  title   = {MEANet: Multi-modal edge-aware network for light field salient object detection},
  author  = {Yao Jiang and Wenbo Zhang and Keren Fu and Qijun Zhao},
  journal = {Neurocomputing},
  volume  = {491},
  pages   = {78-90},
  year    = {2022}
}
```