SqueezeNext: Hardware-Aware Neural Network Design

Introduction

This repository is a PyTorch re-implementation of the paper SqueezeNext: Hardware-Aware Neural Network Design (SqueezeNext).

Gholami A., Kwon K., Wu B., et al. SqueezeNext: Hardware-Aware Neural Network Design. 2018. arXiv:1803.10615v1

The implementation follows the official repository amirgholami/SqueezeNext.

Structure

Here, we use a variation of the latter approach with a two-stage squeeze layer. In each SqueezeNext block, we use two bottleneck modules, each reducing the channel size by a factor of 2, followed by two separable convolutions. We also incorporate a final 1 × 1 expansion module, which further reduces the number of output channels required of the separable convolutions.

(Figure: SqueezeNext block)

(Figure: SqNxt architecture)
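
Below is a minimal PyTorch sketch of one such block, written directly from the description above. The exact channel widths, the 3×1/1×3 ordering, and the shortcut handling are assumptions and may differ from the reference implementation in amirgholami/SqueezeNext.

```python
import torch
import torch.nn as nn


class SqNxtBlock(nn.Module):
    """Rough sketch of a SqueezeNext block (assumed layout, not the reference code)."""

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        reduced = in_channels // 2

        def conv_bn(c_in, c_out, kernel_size, stride=1, padding=0):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size, stride=stride,
                          padding=padding, bias=False),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            )

        self.block = nn.Sequential(
            # Two-stage squeeze: each 1x1 bottleneck halves the channel count.
            conv_bn(in_channels, reduced, kernel_size=1, stride=stride),
            conv_bn(reduced, reduced // 2, kernel_size=1),
            # Separable convolutions: a 3x1 followed by a 1x3.
            conv_bn(reduced // 2, reduced // 2, kernel_size=(3, 1), padding=(1, 0)),
            conv_bn(reduced // 2, reduced, kernel_size=(1, 3), padding=(0, 1)),
            # Final 1x1 expansion back to the desired output width.
            conv_bn(reduced, out_channels, kernel_size=1),
        )

        # Skip connection; a 1x1 projection handles changes in shape.
        if stride != 1 or in_channels != out_channels:
            self.shortcut = conv_bn(in_channels, out_channels,
                                    kernel_size=1, stride=stride)
        else:
            self.shortcut = nn.Identity()

    def forward(self, x):
        return self.block(x) + self.shortcut(x)
```

For example, `SqNxtBlock(64, 64)` keeps the input resolution, while `SqNxtBlock(64, 128, stride=2)` downsamples and widens the feature map.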

Requirements
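
As a rough guideline (an assumption, not pinned versions from the original repository), a recent Python 3 environment with PyTorch and torchvision should suffice. The snippet below only checks that the assumed packages are importable and whether a GPU is visible.

```python
# Hedged sanity check for the assumed environment: Python 3 + PyTorch + torchvision.
# The exact versions used to produce the results below are not specified here.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
```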

Results

We test four models on three datasets: CIFAR-10, CIFAR-100, and Tiny ImageNet.
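
For reference, here is a hedged torchvision loading sketch for the CIFAR datasets. The normalization statistics and augmentation are common defaults, not necessarily those used to obtain the numbers below, and Tiny ImageNet is assumed to be loaded separately from a local folder.

```python
import torch
import torchvision
import torchvision.transforms as transforms

# Common CIFAR preprocessing; the exact augmentation behind the results
# below is an assumption, not taken from the original training scripts.
transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465),
                         (0.2470, 0.2435, 0.2616)),
])

train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128,
                                           shuffle=True, num_workers=4)

# CIFAR-100 is loaded the same way via torchvision.datasets.CIFAR100;
# Tiny ImageNet is typically read with torchvision.datasets.ImageFolder
# from a locally downloaded copy.
```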

CIFAR-10

| Models | Train (Top-1, %) | Validation (Top-1, %) | Width | Depth |
| --- | --- | --- | --- | --- |
| SqNxt_23_1x | 98.7 | 91.9 | 1.0x | 23 |
| SqNxt_23_2x | 99.9 | 93.1 | 2.0x | 23 |
| SqNxt_23_1x_v5 | 99.4 | 91.9 | 1.0x | 23 |
| SqNxt_23_2x_v5 | 99.8 | 93.1 | 2.0x | 23 |

CIFAR-100

| Models | Train (Top-1, %) | Validation (Top-1, %) | Width | Depth |
| --- | --- | --- | --- | --- |
| SqNxt_23_1x | 94.1 | 69.3 | 1.0x | 23 |
| SqNxt_23_2x | 99.7 | 73.1 | 2.0x | 23 |
| SqNxt_23_1x_v5 | 94.7 | 70.1 | 1.0x | 23 |
| SqNxt_23_2x_v5 | 99.8 | 73.2 | 2.0x | 23 |

Tiny ImageNet

| Models | Train (Top-1, %) | Validation (Top-1, %) | Width | Depth |
| --- | --- | --- | --- | --- |
| SqNxt_23_1x | 71.1 | 53.5 | 1.0x | 23 |
| SqNxt_23_2x | 77.2 | 56.7 | 2.0x | 23 |
| SqNxt_23_1x_v5 | 70.9 | 52.7 | 1.0x | 23 |
| SqNxt_23_2x_v5 | 72.4 | 56.7 | 2.0x | 23 |