SparseNet

Sparsely Aggregated Convolutional Networks [[PDF](http://arxiv.org/abs/1801.05895)]

Ligeng Zhu, Ruizhi Deng, Michael Maire, Zhiwei Deng, Greg Mori, Ping Tan

What is SparseNet?

SparseNet is a network architecture in which each layer aggregates only the previous layers at exponentially increasing offsets, i.e., layers i - 1, i - 2, i - 4, i - 8, i - 16 ...
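For concreteness, here is a minimal Python sketch of the offset rule (an illustration, not the authors' reference implementation; the helper name is ours):

```python
# Sparse aggregation rule: layer i concatenates the outputs of the
# layers at exponentially increasing offsets i-1, i-2, i-4, i-8, ...

def sparse_predecessors(i):
    """Indices of the earlier layers aggregated by layer i (0-indexed)."""
    preds, offset = [], 1
    while offset <= i:
        preds.append(i - offset)
        offset *= 2
    return preds

for i in range(1, 9):
    print(i, sparse_predecessors(i))
# e.g. layer 8 aggregates layers [7, 6, 4, 0]:
# each layer has only floor(log2(i)) + 1 incoming connections.
```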

Why use SparseNet?

The connectivity pattern yields state-of-the-art accuracies on the small-scale CIFAR-10/100 datasets. On the large-scale ILSVRC 2012 (ImageNet) dataset, SparseNet achieves accuracy similar to ResNet and DenseNet while using far fewer parameters.

Better Performance

<table>
<tr><th> Without BC </th><th> With BC </th></tr>
<tr><td>

| Architecture | Params | CIFAR-100 error (%) |
| --- | --- | --- |
| DenseNet-40-12 | 1.1M | 24.79 |
| DenseNet-100-12 | 7.2M | 20.97 |
| DenseNet-100-24 | 28.28M | 19.61 |
| SparseNet-40-24 | 0.76M | 24.65 |
| SparseNet-100-36 | 5.65M | 20.50 |
| SparseNet-100-{16,32,64} | 7.22M | 19.49 |

</td><td>

| Architecture | Params | CIFAR-100 error (%) |
| --- | --- | --- |
| DenseNet-100-12 | 0.8M | 22.62 |
| DenseNet-250-24 | 15.3M | 17.60 |
| DenseNet-190-40 | 25.6M | 17.53 |
| SparseNet-100-24 | 1.46M | 22.12 |
| SparseNet-100-{16,32,64} | 4.38M | 19.71 |
| SparseNet-100-{32,64,128} | 16.72M | 17.71 |

</td></tr>
</table>

Efficient Parameter Utilization
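As a rough back-of-the-envelope illustration (our sketch, not code from the paper): a densely connected block feeds every earlier output into layer i, so skip connections grow quadratically with depth, whereas the exponential-offset rule keeps the total near L log L, which is what drives the parameter savings.

```python
import math

def dense_inputs(i):
    # DenseNet-style block: layer i aggregates all i earlier layers
    return i

def sparse_inputs(i):
    # SparseNet block: layer i aggregates floor(log2(i)) + 1 earlier layers
    return math.floor(math.log2(i)) + 1 if i > 0 else 0

L = 100  # layers in the block
print(sum(dense_inputs(i) for i in range(L)))   # 4950 connections, O(L^2)
print(sum(sparse_inputs(i) for i in range(L)))  # 573 connections, O(L log L)
```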

Pretrained model

See the source folder for pretrained models.

Cite

If SparseNet helps your research, please cite our work :)

@article{DBLP:journals/corr/abs-1801-05895,
  author    = {Ligeng Zhu and
               Ruizhi Deng and
               Michael Maire and
               Zhiwei Deng and
               Greg Mori and
               Ping Tan},
  title     = {Sparsely Aggregated Convolutional Networks},
  journal   = {CoRR},
  volume    = {abs/1801.05895},
  year      = {2018},
  url       = {http://arxiv.org/abs/1801.05895},
  archivePrefix = {arXiv},
  eprint    = {1801.05895},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1801-05895},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}