Tiny Machine Learning [website]

[News] We refactored MCUNet into a standalone repo: https://github.com/mit-han-lab/mcunet. Please follow the new repo for updates on TinyEngine release!

[News] We actively collaborate with industrial partners on real-world TinyML applications. Our technology has influenced many products and has been deployed on over 100K IoT devices. Feel free to contact Prof. Song Han for more info.

[News] Our projects have been covered by MIT News, WIRED, Morning Brew, Stacey on IoT, Analytics Insight, and Techable.

TinyML Projects

| Projects | Keywords |
| --- | --- |
| MCUNet | Memory-efficient inference, system-algorithm co-design |
| TinyTL | On-device learning, memory-efficient transfer learning |
| NetAug | Training technique for tiny neural networks |

About TinyML

Intelligent edge devices with rich sensors (e.g., billions of mobile phones and IoT devices) are ubiquitous in our daily lives. Combining artificial intelligence (AI) with these edge devices enables a wide range of real-world applications, such as smart homes, smart retail, and autonomous driving. However, state-of-the-art deep learning systems typically demand tremendous resources for both training and inference (e.g., large labeled datasets, heavy computation, and scarce AI expertise), which hinders their deployment on edge devices. The TinyML project aims to improve the efficiency of deep learning systems so that they require less computation, fewer engineers, and less data, facilitating the giant market of edge AI and AIoT.

<p align="center"> <img src="https://hanlab.mit.edu/projects/tinyml/figures/background1.png" width="100%" /> </p> <p align="center"> <img src="https://hanlab.mit.edu/projects/tinyml/figures/background2.png" width="100%" /> </p>

Demo

Watch the video

Related Projects

MCUNet: Tiny Deep Learning on IoT Devices (NeurIPS'20, spotlight)

TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning (NeurIPS'20)

Once for All: Train One Network and Specialize it for Efficient Deployment (ICLR'20)

ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware (ICLR'19)

AutoML for Architecting Efficient and Specialized Neural Networks (IEEE Micro)

AMC: AutoML for Model Compression and Acceleration on Mobile Devices (ECCV'18)

HAQ: Hardware-Aware Automated Quantization (CVPR'19, oral)