

EasyCV

English | 简体中文

Introduction

EasyCV is an all-in-one computer vision toolbox based on PyTorch. It mainly focuses on self-supervised learning, transformer-based models, and major CV tasks such as image classification, metric learning, object detection, and pose estimation.

Major features

What's New

[🔥 2023.05.09]

[🔥 2023.03.06]

[🔥 2023.01.17]

[🔥 2022.12.02]

[🔥 2022.08.31] We have released YOLOX-PAI, which achieves SOTA results in the 40~50 mAP range with less than 1 ms inference time. We also provide a convenient and fast export/predictor API for end-to-end object detection (see the sketch below). To get started quickly with YOLOX-PAI, click here!
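As a rough illustration of the export/predictor workflow, the sketch below loads an exported YOLOX-PAI checkpoint and runs detection on a single image. It assumes the `TorchYoloXPredictor` interface described in the YOLOX-PAI tutorial; the checkpoint and image paths are placeholders.

```python
import cv2
from easycv.predictors import TorchYoloXPredictor

# Placeholder path to an exported YOLOX-PAI model.
detector = TorchYoloXPredictor('models/yolox_pai_export.pth')

img = cv2.imread('demo/demo.jpg')
# predict() takes a list of images and returns, per image,
# the detection boxes, scores and class labels.
results = detector.predict([img])
print(results[0])
```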

Please refer to change_log.md for more details and history.

Technical Articles

We have a series of technical articles on the functionalities of EasyCV.

Installation

Please refer to the installation section in quick_start.md for installation.
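For reference, EasyCV is published on PyPI, so a typical setup in an existing PyTorch environment looks roughly like the snippet below. The package name `pai-easycv` is assumed here; quick_start.md lists the recommended PyTorch and mmcv versions.

```python
# Assumed PyPI package name; see quick_start.md for the
# recommended PyTorch / mmcv versions:
#   pip install pai-easycv

# Quick sanity check that the package is importable.
import easycv
print(easycv.__version__)
```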

Get Started

Please refer to quick_start.md for a quick start. We also provide tutorials for more usages.
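As a minimal end-to-end sketch in the spirit of quick_start.md, the example below loads an exported classification model with a predictor and runs inference on one image. The `TorchClassifier` class and the paths are assumptions based on the quick-start style examples and may not match the current API exactly.

```python
import cv2
from easycv.predictors.classifier import TorchClassifier

# Placeholder path to an exported classification checkpoint.
classifier = TorchClassifier('work_dirs/classification/epoch_100_export.pth')

img = cv2.imread('demo/cat.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # predictors expect RGB input
output = classifier.predict([img])
print(output)
```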


Model Zoo

<div align="center">
  <b>Architectures</b>
</div>
<table align="center">
  <tbody>
    <tr align="center">
      <td><b>Self-Supervised Learning</b></td>
      <td><b>Image Classification</b></td>
      <td><b>Object Detection</b></td>
      <td><b>Segmentation</b></td>
      <td><b>Object Detection 3D</b></td>
    </tr>
    <tr valign="top">
      <td>
        <ul>
          <li><a href="configs/selfsup/byol">BYOL (NeurIPS'2020)</a></li>
          <li><a href="configs/selfsup/dino">DINO (ICCV'2021)</a></li>
          <li><a href="configs/selfsup/mixco">MiXCo (NeurIPS'2020)</a></li>
          <li><a href="configs/selfsup/moby">MoBY (ArXiv'2021)</a></li>
          <li><a href="configs/selfsup/mocov2">MoCov2 (ArXiv'2020)</a></li>
          <li><a href="configs/selfsup/simclr">SimCLR (ICML'2020)</a></li>
          <li><a href="configs/selfsup/swav">SwAV (NeurIPS'2020)</a></li>
          <li><a href="configs/selfsup/mae">MAE (CVPR'2022)</a></li>
          <li><a href="configs/selfsup/fast_convmae">FastConvMAE (ArXiv'2022)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/classification/imagenet/resnet">ResNet (CVPR'2016)</a></li>
          <li><a href="configs/classification/imagenet/resnext">ResNeXt (CVPR'2017)</a></li>
          <li><a href="configs/classification/imagenet/hrnet">HRNet (CVPR'2019)</a></li>
          <li><a href="configs/classification/imagenet/vit">ViT (ICLR'2021)</a></li>
          <li><a href="configs/classification/imagenet/swint">SwinT (ICCV'2021)</a></li>
          <li><a href="configs/classification/imagenet/efficientformer">EfficientFormer (ArXiv'2022)</a></li>
          <li><a href="configs/classification/imagenet/timm/deit">DeiT (ICML'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/xcit">XCiT (ArXiv'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/tnt">TNT (NeurIPS'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/convit">ConViT (ArXiv'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/cait">CaiT (ICCV'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/levit">LeViT (ICCV'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/convnext">ConvNeXt (CVPR'2022)</a></li>
          <li><a href="configs/classification/imagenet/timm/resmlp">ResMLP (ArXiv'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/coat">CoaT (ICCV'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/convmixer">ConvMixer (ICLR'2022)</a></li>
          <li><a href="configs/classification/imagenet/timm/mlp-mixer">MLP-Mixer (ArXiv'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/nest">NesT (AAAI'2022)</a></li>
          <li><a href="configs/classification/imagenet/timm/pit">PiT (ArXiv'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/twins">Twins (NeurIPS'2021)</a></li>
          <li><a href="configs/classification/imagenet/timm/shuffle_transformer">Shuffle Transformer (ArXiv'2021)</a></li>
          <li><a href="configs/classification/imagenet/deitiii">DeiT III (ECCV'2022)</a></li>
          <li><a href="configs/classification/imagenet/deit">Hydra Attention (2022)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/detection/fcos">FCOS (ICCV'2019)</a></li>
          <li><a href="configs/detection/yolox">YOLOX (ArXiv'2021)</a></li>
          <li><a href="configs/detection/yolox">YOLOX-PAI (ArXiv'2022)</a></li>
          <li><a href="configs/detection/detr">DETR (ECCV'2020)</a></li>
          <li><a href="configs/detection/dab_detr">DAB-DETR (ICLR'2022)</a></li>
          <li><a href="configs/detection/dab_detr">DN-DETR (CVPR'2022)</a></li>
          <li><a href="configs/detection/dino">DINO (ArXiv'2022)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><b>Instance Segmentation</b></li>
          <ul>
            <li><a href="configs/detection/mask_rcnn">Mask R-CNN (ICCV'2017)</a></li>
            <li><a href="configs/detection/vitdet">ViTDet (ArXiv'2022)</a></li>
            <li><a href="configs/segmentation/mask2former">Mask2Former (CVPR'2022)</a></li>
          </ul>
          <li><b>Semantic Segmentation</b></li>
          <ul>
            <li><a href="configs/segmentation/fcn">FCN (CVPR'2015)</a></li>
            <li><a href="configs/segmentation/upernet">UperNet (ECCV'2018)</a></li>
          </ul>
          <li><b>Panoptic Segmentation</b></li>
          <ul>
            <li><a href="configs/segmentation/mask2former">Mask2Former (CVPR'2022)</a></li>
          </ul>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/detection3d/bevformer">BEVFormer (ECCV'2022)</a></li>
        </ul>
      </td>
    </tr>
  </tbody>
</table>

Please refer to the model zoo documentation for more details.

Data Hub

EasyCV has collected dataset information for different scenarios, making it easy for users to fine-tune or evaluate models from the EasyCV model zoo.

Please refer to data_hub.md.
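For a rough picture of how a dataset entry ends up in training, EasyCV configs follow the mmcv style, where a `data` dict points to a data source and a transform pipeline. The type names and paths below are illustrative assumptions; the actual values for each dataset are listed in data_hub.md and the corresponding configs.

```python
# Illustrative mmcv-style data config (type names and paths are
# assumptions; consult data_hub.md and the provided configs for
# the exact values of each dataset).
train_pipeline = [
    dict(type='RandomResizedCrop', size=224),
    dict(type='RandomHorizontalFlip'),
]

data = dict(
    imgs_per_gpu=32,
    workers_per_gpu=4,
    train=dict(
        type='ClsDataset',
        data_source=dict(
            type='ClsSourceImageList',
            list_file='data/imagenet/meta/train_labeled.txt',
            root='data/imagenet/train/'),
        pipeline=train_pipeline,
    ),
)
```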

License

This project is licensed under the Apache License (Version 2.0). This toolkit also contains various third-party components and some code modified from other repos under other open source licenses. See the NOTICE file for more information.

Contact

This repo is currently maintained by the PAI-CV team. You can contact us via:

Enterprise Service

If you need enterprise service support for EasyCV, or want to purchase related cloud product services, you can contact us via the DingDing group.

(DingDing group QR code)