VGGNets for Scene Recognition


Here we release our VGGNet models trained on the large-scale Places205 dataset, called Places205-VGGNet models, from the following report:

http://arxiv.org/abs/1508.01667

Places205-VGGNet Models for Scene Recognition
Limin Wang, Sheng Guo, Weilin Huang, and Yu Qiao, arXiv preprint arXiv:1508.01667, 2015

Performance on the Places205 dataset

| Model | top-1 val/test | top-5 val/test |
| --- | --- | --- |
| Places205-VGGNet-11 | 58.6/59.0 | 87.6/87.6 |
| Places205-VGGNet-13 | 60.2/60.1 | 88.1/88.5 |
| Places205-VGGNet-16 | 60.6/60.3 | 88.5/88.8 |
| Places205-VGGNet-19 | 61.3/61.2 | 88.8/89.3 |

We use 5 crops of each image and their horizontal flips for testing.
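The 5-crop testing strategy can be sketched as follows: take the four corner crops and the center crop of each image, then add their horizontal flips, giving 10 crops per image. This is a minimal NumPy sketch of the cropping step only; the released models' exact preprocessing (resizing, mean subtraction, prediction averaging) is not shown here.

```python
import numpy as np

def ten_crops(img, crop_size):
    """Return the 4 corner crops, the center crop, and their
    horizontal flips (10 crops total) from an H x W x C image."""
    h, w, _ = img.shape
    c = crop_size
    offsets = [
        (0, 0),                        # top-left
        (0, w - c),                    # top-right
        (h - c, 0),                    # bottom-left
        (h - c, w - c),                # bottom-right
        ((h - c) // 2, (w - c) // 2),  # center
    ]
    crops = [img[y:y + c, x:x + c] for y, x in offsets]
    crops += [np.fliplr(cr) for cr in crops]  # horizontal flips
    return np.stack(crops)  # shape: (10, c, c, C)
```

At test time, the network's class scores over the 10 crops are averaged to produce the final prediction for the image.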

Performance on the MIT67 and SUN397 datasets

| Model | MIT67 | SUN397 |
| --- | --- | --- |
| Places205-VGGNet-11 | 82.0 | 65.3 |
| Places205-VGGNet-13 | 81.9 | 66.7 |
| Places205-VGGNet-16 | 81.2 | 66.9 |

We extract the fc6-layer features of our trained Places205-VGGNet models and normalize them by their L2-norm.
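The L2 normalization applied to the fc6 features can be sketched as below. This is a generic sketch, not the exact extraction pipeline; the `eps` guard against zero vectors is an implementation choice, not from the report.

```python
import numpy as np

def l2_normalize(feats, eps=1e-12):
    """L2-normalize each row of a (num_images, dim) feature matrix,
    so every feature vector has unit Euclidean length."""
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    return feats / np.maximum(norms, eps)
```

The normalized features are then fed to a classifier (e.g. a linear SVM) for the MIT67 and SUN397 benchmarks.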

Download

These models are released for non-commercial use only. If you use these models in your research, please cite the report above.

Multi-GPU Implementation

In order to speed up the training procedure of VGGNets, we use a multi-GPU extension of the Caffe toolbox:

https://github.com/yjxiong/caffe/tree/action_recog

Meanwhile, we adopt the multi-scale cropping and corner cropping strategies provided by this extension, which have proved effective for action recognition in videos.
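The idea behind these two augmentation strategies can be sketched as follows: instead of always cropping at a fixed size from a random position, sample the crop size from a set of scales (multi-scale cropping) and the crop position from the four corners and the center (corner cropping), then resize the crop to the network input size. The scale set and the nearest-neighbor resize below are illustrative assumptions, not the extension's exact defaults.

```python
import random
import numpy as np

def multi_scale_corner_crop(img, scales=(256, 224, 192, 168), out_size=224):
    """Sample a crop size from `scales` and a position from the four
    corners or the center, crop, then resize to out_size x out_size."""
    h, w, _ = img.shape
    c = min(random.choice(scales), h, w)  # multi-scale: random crop size
    positions = [
        (0, 0), (0, w - c), (h - c, 0), (h - c, w - c),  # four corners
        ((h - c) // 2, (w - c) // 2),                    # center
    ]
    y, x = random.choice(positions)
    crop = img[y:y + c, x:x + c]
    # nearest-neighbor resize via index mapping (avoids extra dependencies)
    ys = np.arange(out_size) * c // out_size
    xs = np.arange(out_size) * c // out_size
    return crop[ys][:, xs]
```

Restricting crop positions to corners and center avoids the bias of purely random cropping toward image centers, while varying the crop scale exposes the network to objects at multiple sizes.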

Questions

Contact