CVNets: A library for training computer vision networks
CVNets is a computer vision toolkit that allows researchers and engineers to train standard and novel mobile and non-mobile computer vision models for a variety of tasks, including object classification, object detection, semantic segmentation, and foundation models (e.g., CLIP).
Table of contents
- What's new?
- Installation
- Getting started
- Supported models and tasks
- Maintainers
- Research effort at Apple using CVNets
- Contributing to CVNets
- License
- Citation
What's new?
- July 2023: Version 0.4 of the CVNets library includes
- Bytes Are All You Need: Transformers Operating Directly On File Bytes
- RangeAugment: Efficient online augmentation with Range Learning
- Training and evaluating foundation models (CLIP)
- Mask R-CNN
- EfficientNet, Swin Transformer, and ViT
- Enhanced distillation support
Installation
We recommend Python 3.10+ and PyTorch v1.12.0 or newer.
The instructions below use Conda; if you don't have Conda installed, see How to Install Conda.
# Clone the repo
git clone git@github.com:apple/ml-cvnets.git
cd ml-cvnets
# Create a virtual env. We use Conda
conda create -n cvnets python=3.10.8
conda activate cvnets
# install requirements and CVNets package
pip install -r requirements.txt -c constraints.txt
pip install --editable .
Getting started
- General instructions for working with CVNets are given here.
- Examples for training and evaluating models are provided here and here.
- Examples for converting a PyTorch model to CoreML are provided here.
Supported models and tasks
To see a list of available models and benchmarks, please refer to the Model Zoo and the examples folder.
<details>
<summary> ImageNet classification models </summary>

- CNNs
- Transformers
- Soft distillation
- Hard distillation

</details>
Maintainers
This code was developed by <a href="https://sacmehta.github.io" target="_blank">Sachin</a> and is now maintained by Sachin, <a href="https://mchorton.com" target="_blank">Maxwell Horton</a>, <a href="https://www.mohammad.pro" target="_blank">Mohammad Sekhavat</a>, and Yanzi Jin.
Previous Maintainers
- <a href="https://farzadab.github.io" target="_blank">Farzad</a>
Research effort at Apple using CVNets
Below is a list of publications from Apple that use CVNets:
- MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer, ICLR'22
- CVNets: High performance library for Computer Vision, ACM MM'22
- Separable Self-attention for Mobile Vision Transformers (MobileViTv2)
- RangeAugment: Efficient Online Augmentation with Range Learning
- Bytes Are All You Need: Transformers Operating Directly on File Bytes
Contributing to CVNets
We welcome PRs from the community! You can find information about contributing to CVNets in our contributing document.
Please remember to follow our Code of Conduct.
License
For license details, see LICENSE.
Citation
If you find our work useful, please cite the following papers:
@inproceedings{mehta2022mobilevit,
title={MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author={Sachin Mehta and Mohammad Rastegari},
booktitle={International Conference on Learning Representations},
year={2022}
}
@inproceedings{mehta2022cvnets,
author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
title = {CVNets: High Performance Library for Computer Vision},
year = {2022},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
series = {MM '22}
}