
This repository provides the trained models MoreMNAS-A, B, C, and D.

How to reproduce & calculate metrics

$ python calculate.py --pb_path ./pretrained_model/MoreMNAS-A.pb \
                      --save_path ./result/

Comparison of some state-of-the-art SR Models

Results on the benchmark sets are reported as PSNR/SSIM.

| Method | MulAdds | Params | Set5 | Set14 | BSD100 | Urban100 |
| --- | --- | --- | --- | --- | --- | --- |
| SRCNN | 52.7G | 57K | 36.66/0.9542 | 32.42/0.9063 | 31.36/0.8879 | 29.50/0.8946 |
| FSRCNN | 6.0G | 12K | 37.00/0.9558 | 32.63/0.9088 | 31.53/0.8920 | 29.88/0.9020 |
| VDSR | 612.6G | 665K | 37.53/0.9587 | 33.03/0.9124 | 31.90/0.8960 | 30.76/0.9140 |
| DRRN | 6,796.9G | 297K | 37.74/0.9591 | 33.23/0.9136 | 32.05/0.8973 | 31.23/0.9188 |
| MoreMNAS-A (ours) | 238.6G | 1039K | 37.63/0.9584 | 33.23/0.9138 | 31.95/0.8961 | 31.24/0.9187 |
| MoreMNAS-B (ours) | 256.9G | 1118K | 37.58/0.9584 | 33.22/0.9135 | 31.91/0.8959 | 31.14/0.9175 |
| MoreMNAS-C (ours) | 5.5G | 25K | 37.06/0.9561 | 32.75/0.9094 | 31.50/0.8904 | 29.92/0.9023 |
| MoreMNAS-D (ours) | 152.4G | 664K | 37.57/0.9584 | 33.25/0.9142 | 31.94/0.8966 | 31.25/0.9191 |
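The PSNR values in the table above are derived from the mean squared error between the super-resolved output and the ground-truth image. As a reference for how that number is obtained, here is a minimal pure-Python sketch; the function name `psnr` and the flat-list interface are illustrative and are not taken from the repository's calculate.py:

```python
import math

def psnr(img1, img2, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-sized images,
    given here as flat lists of pixel values in [0, max_val]."""
    mse = sum((a - b) ** 2 for a, b in zip(img1, img2)) / len(img1)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

# Example: maximally different pixels give a PSNR of 0 dB.
print(psnr([255, 255, 255], [0, 0, 0]))  # -> 0.0
```

In practice the benchmark numbers are computed on the luminance (Y) channel after cropping border pixels, which is why results can differ slightly between evaluation scripts.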

Qualitative results

Here are some results of the MoreMNAS models vs. VDSR on Set5. The complete results can be generated with the command above.

Comparison with VDSR

Related Work

| Method | URL | Language | Official |
| --- | --- | --- | --- |
| SRCNN | http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html | Matlab, Caffe | Yes |
| FSRCNN | http://mmlab.ie.cuhk.edu.hk/projects/FSRCNN.html | Matlab, Caffe | Yes |
| DRRN | https://github.com/tyshiwo/DRRN_CVPR17 | Caffe | Yes |
| VDSR | https://github.com/twtygqyy/pytorch-vdsr | Pytorch | Yes |

Citation

If you find this work useful, citations are welcome!

@article{chu2019multi,
  title={Multi-objective reinforced evolution in mobile neural architecture search},
  author={Chu, Xiangxiang and Zhang, Bo and Xu, Ruijun and Ma, Hailong},
  journal={arXiv preprint arXiv:1901.01074},
  year={2019}
}