Efficient networks for Computer Vision

This repo contains the source code of our work on designing efficient networks for different computer vision tasks: <span style="color:blue"> (1) Image classification, (2) Object detection, and (3) Semantic segmentation.</span>

<table> <tr> <td colspan=2 align="center"><b>Real-time semantic segmentation using ESPNetv2 on iPhone7. See <a href="https://github.com/sacmehta/ESPNetv2-COREML" target="_blank">here</a> for iOS application source code using COREML.</b></td> </tr> <tr> <td> <img src="images/espnetv2_iphone7_video_1.gif" alt="Seg demo on iPhone7"> </td> <td> <img src="images/espnetv2_iphone7_video_2.gif" alt="Seg demo on iPhone7"> </td> </tr> </table> <table> <tr> <td colspan=2 align="center"><b>Real-time object detection using ESPNetv2</b></td> </tr> <tr> <td colspan=2 align="center"> <img src="images/espnetv2_detection_2.gif" alt="Demo 1"> </td> </tr> <tr> <td> <img src="images/espnetv2_detection_1.gif" alt="Demo 2"> </td> <td> <img src="images/espnetv2_detection_3.gif" alt="Demo 3"> </td> </tr> </table>

Table of contents

  1. Key highlights
  2. Supported networks
  3. Relevant papers
  4. Blogs
  5. Performance comparison
  6. Training recipe
  7. Instructions for segmentation and detection demos
  8. Citation
  9. License
  10. Acknowledgements
  11. Contributions
  12. Notes

Key highlights

Supported networks

This repo supports the following networks:

Relevant papers

Blogs

Performance comparison

ImageNet

The figure below compares the performance of DiCENet with other efficient networks on the ImageNet dataset. DiCENet outperforms existing efficient networks, including MobileNetv2 and ShuffleNetv2. More details here

Figure: DiCENet performance on the ImageNet dataset
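
For intuition, here is a minimal PyTorch sketch of the dimension-wise convolution idea behind DiCENet: a depth-wise convolution applied independently along each of the three tensor dimensions (channel, height, width), followed by a fusion step. The class name and the simple concat-plus-1x1 fusion below are illustrative assumptions, not the repository's implementation; the actual DiCE unit uses a more elaborate dimension-wise fusion.

```python
import torch
import torch.nn as nn

class DimWiseConv(nn.Module):
    """Illustrative dimension-wise convolution: one depth-wise conv per
    tensor dimension (channel, height, width). Fusion here is a simple
    concatenation + 1x1 conv; the real DiCE unit fuses differently."""

    def __init__(self, channels, height, width, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # Kernel slides over (H, W); one filter per channel.
        self.conv_ch = nn.Conv2d(channels, channels, kernel_size,
                                 padding=padding, groups=channels)
        # After transposing, H plays the role of "channels": the kernel
        # slides over the (C, W) plane, one filter per row.
        self.conv_h = nn.Conv2d(height, height, kernel_size,
                                padding=padding, groups=height)
        # Likewise, W plays the role of "channels": kernel slides over (C, H).
        self.conv_w = nn.Conv2d(width, width, kernel_size,
                                padding=padding, groups=width)
        # Fuse the three responses back to the original channel count.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x):                      # x: (N, C, H, W)
        y_ch = self.conv_ch(x)
        y_h = self.conv_h(x.transpose(1, 2)).transpose(1, 2)
        y_w = self.conv_w(x.transpose(1, 3)).transpose(1, 3)
        return self.fuse(torch.cat([y_ch, y_h, y_w], dim=1))

x = torch.randn(1, 32, 56, 56)
print(DimWiseConv(32, 56, 56)(x).shape)  # torch.Size([1, 32, 56, 56])
```

Note that the height- and width-wise convolutions need the spatial size at construction time, which is why dimension-wise operators are typically defined per feature-map resolution.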

Object Detection

The table below compares our architecture with other detection networks on the MS-COCO dataset. Our network is both fast and accurate. More details here

<table> <tr> <td></td> <td colspan=4 align="center"> <b>MS-COCO</b></td> </tr> <tr> <td></td> <td align="center"> <b>Image Size</b> </td> <td align="center"> <b>FLOPs</b> </td> <td align="center"> <b>mAP</b> </td> <td align="center"> <b>FPS</b> </td> </tr> <tr> <td> SSD-VGG</td> <td align="center"> 512x512 </td> <td align="center"> 100 B</td> <td align="center"> 26.8 </td> <td align="center"> 19 </td> </tr> <tr> <td> YOLOv2</td> <td align="center"> 544x544 </td> <td align="center"> 17.5 B</td> <td align="center"> 21.6 </td> <td align="center"> 40 </td> </tr> <tr> <td> ESPNetv2-SSD (Ours) </td> <td align="center"> 512x512 </td> <td align="center"> 3.2 B</td> <td align="center"> 24.54 </td> <td align="center"> 35 </td> </tr> </table>
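
The FLOPs columns in these tables can be approximated in PyTorch with forward hooks. The sketch below counts only Conv2d multiply-accumulates and uses the common 2-FLOPs-per-MAC convention; these are assumptions, and the numbers reported in our papers may be counted differently (e.g., including other layer types).

```python
import torch
import torch.nn as nn

def count_conv_flops(model, input_size):
    """Rough FLOP count: 2 * MACs for every Conv2d, via forward hooks.
    Linear layers, batch norm, and activations are ignored here."""
    total = 0

    def hook(module, inputs, output):
        nonlocal total
        # MACs per output element = (in_channels / groups) * kH * kW
        kh, kw = module.kernel_size
        macs_per_elem = (module.in_channels // module.groups) * kh * kw
        total += 2 * macs_per_elem * output.numel()

    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.Conv2d)]
    with torch.no_grad():
        model(torch.randn(1, *input_size))
    for h in handles:
        h.remove()
    return total

# Toy example; plug in any classification/detection/segmentation model.
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1))
print(f"{count_conv_flops(net, (3, 512, 512)) / 1e9:.2f} B FLOPs")
```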

Semantic Segmentation

The table below compares the performance of ESPNet and ESPNetv2 on two different datasets. Note that ESPNets are among the first efficient networks to deliver performance competitive with existing networks on the PASCAL VOC dataset, even at low resolutions such as 256x256. See here for more details.

<table> <tr> <td></td> <td colspan=3 align="center"> <b>Cityscapes</b></td> <td colspan=3 align="center"> <b>PASCAL VOC 2012</b> </td> </tr> <tr> <td></td> <td align="center"> <b>Image Size</b> </td> <td align="center"> <b>FLOPs</b> </td> <td align="center"> <b>mIOU</b> </td> <td align="center"> <b>Image Size</b> </td> <td align="center"> <b>FLOPs</b></td> <td align="center"> <b>mIOU</b> </td> </tr> <tr> <td> ESPNet</td> <td align="center"> 1024x512 </td> <td align="center"> 4.5 B</td> <td align="center"> 60.3 </td> <td align="center"> 512x512 </td> <td align="center"> 2.2 B</td> <td align="center"> 63 </td> </tr> <tr> <td> ESPNetv2</td> <td align="center"> 1024x512 </td> <td align="center"> 2.7 B</td> <td align="center"> <b>66.2</b> </td> <td align="center"> 384x384 </td> <td align="center"> 0.76 B</td> <td align="center"> <b>68</b> </td> </tr> </table>
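
For context, ESPNet's efficiency comes from the ESP module: a point-wise projection followed by a spatial pyramid of dilated convolutions whose outputs are merged with hierarchical feature fusion (HFF). The sketch below is a stripped-down illustration of that structure, not the repository's implementation; the real module also adds a residual connection and handles channel splits more carefully.

```python
import torch
import torch.nn as nn

class ESPModule(nn.Module):
    """Simplified ESP module: reduce channels with a 1x1 conv, apply K
    parallel dilated 3x3 convs (dilation 1, 2, 4, ...), then merge with
    hierarchical feature fusion (HFF) to suppress gridding artifacts."""

    def __init__(self, in_ch, out_ch, K=4):
        super().__init__()
        d = out_ch // K                       # channels per branch
        self.reduce = nn.Conv2d(in_ch, d, kernel_size=1)
        self.branches = nn.ModuleList(
            nn.Conv2d(d, d, kernel_size=3, padding=2**k, dilation=2**k)
            for k in range(K))

    def forward(self, x):
        x = self.reduce(x)
        outs = [branch(x) for branch in self.branches]
        # HFF: add each branch's output to the previous sum before concat.
        for k in range(1, len(outs)):
            outs[k] = outs[k] + outs[k - 1]
        return torch.cat(outs, dim=1)

x = torch.randn(1, 64, 128, 128)
print(ESPModule(64, 64)(x).shape)  # torch.Size([1, 64, 128, 128])
```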

Training Recipe

Image Classification

Details about training and testing are provided here.

Details about performance of different models are provided here.

Semantic Segmentation

Details about training and testing are provided here.

Details about performance of different models are provided here.

Object Detection

Details about training and testing are provided here.

Details about performance of different models are provided here.

Instructions for segmentation and detection demos

To run the segmentation demo, just type:

```
python segmentation_demo.py
```

To run the detection demo, run the following command:

```
python detection_demo.py
```

OR

```
python detection_demo.py --live
```

For other supported arguments, please see the corresponding files.
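
If the demo scripts parse their options with argparse (an assumption, since the argument handling is not shown here), the standard help flag will also list them:

```
# Assumes argparse-based CLIs; --help then prints all supported flags.
python detection_demo.py --help
```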

Citation

If you find this repository helpful, please feel free to cite our work:

@article{mehta2019dicenet,
  author={Mehta, Sachin and Hajishirzi, Hannaneh and Rastegari, Mohammad},
  title={DiCENet: Dimension-wise Convolutions for Efficient Networks},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2020}
}

@inproceedings{mehta2018espnetv2,
  title={ESPNetv2: A Light-weight, Power Efficient, and General Purpose Convolutional Neural Network},
  author={Mehta, Sachin and Rastegari, Mohammad and Shapiro, Linda and Hajishirzi, Hannaneh},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}

@inproceedings{mehta2018espnet,
  title={ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation},
  author={Mehta, Sachin and Rastegari, Mohammad and Caspi, Anat and Shapiro, Linda and Hajishirzi, Hannaneh},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={552--568},
  year={2018}
}

License

By downloading this software, you acknowledge that you agree to the terms and conditions given here.

Acknowledgements

Most of our object detection code is adapted from SSD in PyTorch. We thank the authors for their amazing work.

Want to help out?

Thanks for your interest in our work :).

Open tasks that are interesting:

Notes

Notes about DiCENet paper

This repository contains DiCENet's source code in PyTorch only, and you should be able to reproduce the results of v1/v2 of our arXiv paper. To reproduce the results of our T-PAMI paper, you need to incorporate the MobileNet tricks from Section 5.3, which are currently not part of this repository.