
MLSP feature learning on AVA

This repository contains part of the code for the paper "Effective Aesthetics Prediction with Multi-level Spatially Pooled Features". If you use the code, please cite:

@inproceedings{hosu2019effective,
  title={Effective Aesthetics Prediction with Multi-level Spatially Pooled Features},
  author={Hosu, Vlad and Goldlucke, Bastian and Saupe, Dietmar},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={9375--9383},
  year={2019}
}

Multi-level Spatially-Pooled (MLSP) features, extracted from ImageNet pre-trained Inception-type networks, are used to train aesthetics score (MOS) predictors on the Aesthetic Visual Analysis (AVA) database. The code shows how to train models based on both narrow and wide MLSP features. Several fully trained models are included, together with demos showing how to apply them to new images. The models are stored with git LFS and can be downloaded from here as well. The included notebooks rely on the kutils library.
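The core idea behind the narrow MLSP feature can be sketched in a few lines: activations from several levels of the network are each pooled spatially (global average pooling) and concatenated into one fixed-size vector per image. A minimal NumPy sketch, using toy activation shapes rather than the actual Inception block sizes:

```python
import numpy as np

def mlsp_narrow(activations):
    """Global-average-pool each multi-level activation block over its
    spatial dimensions and concatenate along channels, yielding one
    fixed-size feature vector per image (the 'narrow' MLSP feature).

    activations: list of arrays shaped (H, W, C); H, W may differ per block.
    """
    pooled = [a.mean(axis=(0, 1)) for a in activations]  # (H, W, C) -> (C,)
    return np.concatenate(pooled)

# Toy stand-ins for Inception 'mixed' blocks at three resolutions
acts = [
    np.random.rand(35, 35, 288),
    np.random.rand(17, 17, 768),
    np.random.rand(8, 8, 2048),
]
feat = mlsp_narrow(acts)
print(feat.shape)  # (3104,) -- total channel count across blocks
```

The wide variant keeps a small spatial grid per block instead of pooling it down to a single value; the pooling-and-concatenation pattern is otherwise the same.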

Deployment

There are several options for running the code:

  1. Create your own Python environment with:
     - Python 2.7.16
     - tensorflow-gpu 1.14.0
     - keras-gpu 2.2.4
  2. Run it on Google Colab; see the example notebook (py3) for prediction.
  3. Deploy the code via the Docker image in jupyter-data-science, using the Python 2 environment.
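For option 1, the version pins above could be captured in a requirements-style file. A sketch (note that the pip package name for Keras is `keras`, whereas `keras-gpu` is the conda package name):

```
# Python 2.7.16 environment; pins taken from the list above
tensorflow-gpu==1.14.0
keras==2.2.4
```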

Overview

The following files are included (descriptions inferred from the filenames and the overview above):

- extract_mlsp.ipynb: extracts MLSP features from ImageNet pre-trained Inception-type networks.
- train_mlsp_narrow.ipynb, train_mlsp_narrow_aug.ipynb: train aesthetics score predictors on narrow MLSP features, without and with augmentation.
- train_mlsp_wide.ipynb: trains a predictor on wide MLSP features.
- predict_mlsp_wide.ipynb: applies a trained wide-feature model to new images (open on Google Colab (py3)).
- metadata/AVA_data_official_test.csv: metadata for the official AVA test split.
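As a sketch of how the split metadata might be consumed, assuming hypothetical column names (`image_name`, `MOS`, `set`) that may differ from the actual CSV header:

```python
import pandas as pd

def load_split(csv_path, split="test"):
    """Load AVA metadata and return rows for one split.

    The column names used here ('image_name', 'MOS', 'set') are
    illustrative assumptions, not the confirmed CSV schema.
    """
    df = pd.read_csv(csv_path)
    # Filter by split if a 'set' column is present
    if "set" in df.columns:
        df = df[df["set"] == split]
    return df

# Usage: rows = load_split("metadata/AVA_data_official_test.csv")
```

Check the actual CSV header before relying on these names.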