
Next-ViT

This repo is the official implementation of "Next-ViT: Next Generation Vision Transformer for Efficient Deployment in Realistic Industrial Scenarios". The algorithm is proposed by the ByteDance Intelligent Creation AutoML Team (字节跳动-智能创作 AutoML团队).

Updates

08/16/2022

  1. Pretrained models on a large-scale dataset, following [SSLD], are provided.
  2. Segmentation results with the large-scale pretrained models are also presented.

Overview

<div style="text-align: center"> <img src="images/structure.png" title="Next-ViT-S" height="75%" width="75%"> </div>

<center>Figure 1. The overall hierarchical architecture of Next-ViT.</center>

Introduction

Due to the complex attention mechanisms and model design, most existing vision Transformers (ViTs) cannot perform as efficiently as convolutional neural networks (CNNs) in realistic industrial deployment scenarios, e.g. TensorRT and CoreML. This poses a distinct challenge: Can a visual neural network be designed to infer as fast as CNNs and perform as powerfully as ViTs? Recent works have tried to design CNN-Transformer hybrid architectures to address this issue, yet the overall performance of these works is far from satisfactory. To this end, we propose a next generation vision Transformer for efficient deployment in realistic industrial scenarios, namely Next-ViT, which dominates both CNNs and ViTs from the perspective of the latency/accuracy trade-off. In this work, the Next Convolution Block (NCB) and Next Transformer Block (NTB) are developed to capture local and global information, respectively, with deployment-friendly mechanisms. Then, the Next Hybrid Strategy (NHS) is designed to stack NCB and NTB in an efficient hybrid paradigm, which boosts performance in various downstream tasks. Extensive experiments show that Next-ViT significantly outperforms existing CNNs, ViTs and CNN-Transformer hybrid architectures with respect to the latency/accuracy trade-off across various vision tasks. On TensorRT, Next-ViT surpasses ResNet by 5.5 mAP (from 40.4 to 45.9) on COCO detection and 7.7% mIoU (from 38.8% to 46.5%) on ADE20K segmentation under similar latency. Meanwhile, it achieves comparable performance to CSWin, while the inference speed is accelerated by 3.6×. On CoreML, Next-ViT surpasses EfficientFormer by 4.6 mAP (from 42.6 to 47.2) on COCO detection and 3.5% mIoU (from 45.1% to 48.6%) on ADE20K segmentation under similar latency.

<center>Figure 2. Comparison among Next-ViT and efficient Networks, in terms of accuracy-latency trade-off.</center>

Usage

First, clone the repository locally:

git clone https://github.com/bytedance/Next-ViT.git

Then, install torch==1.10.0, mmcv-full==1.5.0, timm==0.4.9 and other dependencies:

pip3 install -r requirements.txt

Data preparation

Download and extract the ImageNet train and val images from http://image-net.org/. The directory structure is the standard layout expected by torchvision's datasets.ImageFolder; the training and validation data are expected to be in the train/ and val/ folders respectively:

/path/to/imagenet/
  train/
    class1/
      img1.jpeg
    class2/
      img2.jpeg
  val/
    class1/
      img3.jpeg
    class2/
      img4.jpeg
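
With this layout, torchvision's datasets.ImageFolder derives one class per subfolder, sorted alphabetically and mapped to consecutive integer labels. A minimal stdlib-only sketch of that class discovery (find_classes here is a hypothetical helper mimicking ImageFolder, not code from this repo):

```python
import os
import tempfile

def find_classes(root):
    # Mimic torchvision ImageFolder: each subfolder of root is a class,
    # sorted alphabetically and mapped to consecutive integer labels.
    classes = sorted(entry.name for entry in os.scandir(root) if entry.is_dir())
    return classes, {cls: i for i, cls in enumerate(classes)}

# Build a tiny tree matching the layout above and inspect the mapping.
root = tempfile.mkdtemp()
for split in ("train", "val"):
    for cls in ("class1", "class2"):
        os.makedirs(os.path.join(root, split, cls), exist_ok=True)

classes, class_to_idx = find_classes(os.path.join(root, "train"))
print(classes)        # ['class1', 'class2']
print(class_to_idx)   # {'class1': 0, 'class2': 1}
```

Misnamed class folders (e.g. `class/2` instead of `class2/`) silently become extra or missing classes, so it is worth printing the mapping once before a long training run.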

Image Classification

We provide a series of Next-ViT models pretrained on the ImageNet-1K (ILSVRC2012) dataset. More details can be seen in the [paper].

| Model | Dataset | Resolution | FLOPs (G) | Params (M) | TensorRT<br/>Latency (ms) | CoreML<br/>Latency (ms) | Acc@1 | ckpt | log |
|---|---|---|---|---|---|---|---|---|---|
| Next-ViT-S | ImageNet-1K | 224 | 5.8 | 31.7 | 7.7 | 3.5 | 82.5 | ckpt | log |
| Next-ViT-B | ImageNet-1K | 224 | 8.3 | 44.8 | 10.5 | 4.5 | 83.2 | ckpt | log |
| Next-ViT-L | ImageNet-1K | 224 | 10.8 | 57.8 | 13.0 | 5.5 | 83.6 | ckpt | log |
| Next-ViT-S | ImageNet-1K | 384 | 17.3 | 31.7 | 21.6 | 8.9 | 83.6 | ckpt | log |
| Next-ViT-B | ImageNet-1K | 384 | 24.6 | 44.8 | 29.6 | 12.4 | 84.3 | ckpt | log |
| Next-ViT-L | ImageNet-1K | 384 | 32.0 | 57.8 | 36.0 | 15.2 | 84.7 | ckpt | log |

We also provide a series of Next-ViT models pretrained on a large-scale dataset following [SSLD]. More details can be seen in the [paper].

| Model | Dataset | Resolution | FLOPs (G) | Params (M) | TensorRT<br/>Latency (ms) | CoreML<br/>Latency (ms) | Acc@1 | ckpt |
|---|---|---|---|---|---|---|---|---|
| Next-ViT-S | ImageNet-1K-6M | 224 | 5.8 | 31.7 | 7.7 | 3.5 | 84.8 | ckpt |
| Next-ViT-B | ImageNet-1K-6M | 224 | 8.3 | 44.8 | 10.5 | 4.5 | 85.1 | ckpt |
| Next-ViT-L | ImageNet-1K-6M | 224 | 10.8 | 57.8 | 13.0 | 5.5 | 85.4 | ckpt |
| Next-ViT-S | ImageNet-1K-6M | 384 | 17.3 | 31.7 | 21.6 | 8.9 | 85.8 | ckpt |
| Next-ViT-B | ImageNet-1K-6M | 384 | 24.6 | 44.8 | 29.6 | 12.4 | 86.1 | ckpt |
| Next-ViT-L | ImageNet-1K-6M | 384 | 32.0 | 57.8 | 36.0 | 15.2 | 86.4 | ckpt |

Training

To train Next-ViT-S on ImageNet using 8 gpus for 300 epochs, run:

cd classification/
bash train.sh 8 --model nextvit_small --batch-size 256 --lr 5e-4 --warmup-epochs 20 --weight-decay 0.1 --data-path your_imagenet_path

To finetune Next-ViT-S with 384×384 input size for 30 epochs, run:

cd classification/
bash train.sh 8 --model nextvit_small --batch-size 128 --lr 5e-6 --warmup-epochs 0 --weight-decay 1e-8 --epochs 30 --sched step --decay-epochs 60 --input-size 384 --resume ../checkpoints/nextvit_small_in1k_224.pth --finetune --data-path your_imagenet_path 

Evaluation

To evaluate the performance of Next-ViT-S on ImageNet using 8 gpus, run:

cd classification/
bash train.sh 8 --model nextvit_small --batch-size 256 --lr 5e-4 --warmup-epochs 20 --weight-decay 0.1 --data-path your_imagenet_path --resume ../checkpoints/nextvit_small_in1k_224.pth --eval
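
The Acc@1 values reported in the tables above are standard top-1 accuracy: the fraction of validation images whose highest-scoring predicted class matches the ground-truth label. A minimal sketch of the metric itself (top1_accuracy is a hypothetical helper, not this repo's evaluation code):

```python
def top1_accuracy(predictions, labels):
    # Top-1 accuracy: percentage of samples whose predicted class
    # (the argmax over the logits) equals the ground-truth label.
    correct = sum(p == t for p, t in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# 3 of 4 predictions match the labels -> 75.0
print(top1_accuracy([3, 1, 2, 0], [3, 1, 2, 9]))
```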

Detection

Our code is based on mmdetection; please install mmdetection==2.23.0. Next-ViT serves as a strong backbone for Mask R-CNN. It is easy to apply Next-ViT to other detectors provided by mmdetection based on our examples. More details can be seen in the [paper].

Mask R-CNN

| Backbone | Pretrained | Lr Schd | Params (M) | FLOPs (G) | TensorRT<br/>Latency (ms) | CoreML<br/>Latency (ms) | bbox mAP | mask mAP | ckpt | log |
|---|---|---|---|---|---|---|---|---|---|---|
| Next-ViT-S | ImageNet-1K | 1x | 51.8 | 290 | 38.2 | 18.1 | 45.9 | 41.8 | ckpt | log |
| Next-ViT-S | ImageNet-1K | 3x | 51.8 | 290 | 38.2 | 18.1 | 48.0 | 43.2 | ckpt | log |
| Next-ViT-B | ImageNet-1K | 1x | 64.9 | 340 | 51.6 | 24.4 | 47.2 | 42.8 | ckpt | log |
| Next-ViT-B | ImageNet-1K | 3x | 64.9 | 340 | 51.6 | 24.4 | 49.5 | 44.4 | ckpt | log |
| Next-ViT-L | ImageNet-1K | 1x | 77.9 | 391 | 65.3 | 30.1 | 48.0 | 43.2 | ckpt | log |
| Next-ViT-L | ImageNet-1K | 3x | 77.9 | 391 | 65.3 | 30.1 | 50.2 | 44.8 | ckpt | log |

Training

To train Mask R-CNN with Next-ViT-S backbone using 8 gpus, run:

cd detection/
PORT=29501 bash dist_train.sh configs/mask_rcnn_nextvit_small_1x.py 8

Evaluation

To evaluate Mask R-CNN with Next-ViT-S backbone using 8 gpus, run:

cd detection/
PORT=29501 bash dist_test.sh configs/mask_rcnn_nextvit_small_1x.py ../checkpoints/mask_rcnn_1x_nextvit_small.pth 8 --eval bbox

Semantic Segmentation

Our code is based on mmsegmentation; please install mmsegmentation==0.23.0. Next-ViT serves as a strong backbone for segmentation tasks on the ADE20K dataset. It is easy to extend it to other datasets and segmentation methods. More details can be seen in the [paper].

Semantic FPN 80k

| Backbone | Pretrained | FLOPs (G) | Params (M) | TensorRT<br/>Latency (ms) | CoreML<br/>Latency (ms) | mIoU | ckpt | log |
|---|---|---|---|---|---|---|---|---|
| Next-ViT-S | ImageNet-1K | 208 | 36.3 | 38.2 | 18.1 | 46.5 | ckpt | log |
| Next-ViT-B | ImageNet-1K | 260 | 49.3 | 51.6 | 24.4 | 48.6 | ckpt | log |
| Next-ViT-L | ImageNet-1K | 331 | 62.4 | 65.3 | 30.1 | 49.1 | ckpt | log |
| Next-ViT-S | ImageNet-1K-6M | 208 | 36.3 | 38.2 | 18.1 | 48.8 | ckpt | log |
| Next-ViT-B | ImageNet-1K-6M | 260 | 49.3 | 51.6 | 24.4 | 50.2 | ckpt | log |
| Next-ViT-L | ImageNet-1K-6M | 331 | 62.4 | 65.3 | 30.1 | 50.5 | ckpt | log |

UperNet 160k

| Backbone | Pretrained | FLOPs (G) | Params (M) | TensorRT<br/>Latency (ms) | CoreML<br/>Latency (ms) | mIoU (ss/ms) | ckpt | log |
|---|---|---|---|---|---|---|---|---|
| Next-ViT-S | ImageNet-1K | 968 | 66.3 | 38.2 | 18.1 | 48.1/49.0 | ckpt | log |
| Next-ViT-B | ImageNet-1K | 1020 | 79.3 | 51.6 | 24.4 | 50.4/51.1 | ckpt | log |
| Next-ViT-L | ImageNet-1K | 1072 | 92.4 | 65.3 | 30.1 | 50.1/50.8 | ckpt | log |
| Next-ViT-S | ImageNet-1K-6M | 968 | 66.3 | 38.2 | 18.1 | 49.8/50.8 | ckpt | log |
| Next-ViT-B | ImageNet-1K-6M | 1020 | 79.3 | 51.6 | 24.4 | 51.8/52.8 | ckpt | log |
| Next-ViT-L | ImageNet-1K-6M | 1072 | 92.4 | 65.3 | 30.1 | 51.5/52.0 | ckpt | log |
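
The mIoU figures above (ss/ms denotes single-/multi-scale testing) average per-class intersection-over-union, where each class's IoU is TP / (TP + FP + FN) over all pixels. A minimal sketch of the metric on flat label lists (mean_iou is a hypothetical helper, not this repo's evaluation code):

```python
def mean_iou(pred, gt, num_classes):
    # Per-class IoU = TP / (TP + FP + FN), averaged over classes
    # that appear in either the prediction or the ground truth.
    ious = []
    for c in range(num_classes):
        tp = sum(p == c and g == c for p, g in zip(pred, gt))
        fp = sum(p == c and g != c for p, g in zip(pred, gt))
        fn = sum(p != c and g == c for p, g in zip(pred, gt))
        if tp + fp + fn == 0:
            continue  # class absent everywhere; skip it
        ious.append(tp / (tp + fp + fn))
    return 100.0 * sum(ious) / len(ious)

# Two classes, one pixel misclassified:
# class 0 IoU = 1/2, class 1 IoU = 2/3 -> mean ~58.3
print(round(mean_iou([0, 0, 1, 1], [0, 1, 1, 1], 2), 1))
```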

Training

To train Semantic FPN 80k with Next-ViT-S backbone using 8 gpus, run:

cd segmentation/
PORT=29501 bash dist_train.sh configs/fpn_512_nextvit_small_80k.py 8

Evaluation

To evaluate Semantic FPN 80k(single scale) with Next-ViT-S backbone using 8 gpus, run:

cd segmentation/
PORT=29501 bash dist_test.sh configs/fpn_512_nextvit_small_80k.py ../checkpoints/fpn_80k_nextvit_small.pth 8 --eval mIoU

Deployment and Latency Measurement

We provide scripts to convert Next-ViT from a PyTorch model to a CoreML model or a TensorRT engine.

CoreML

To convert Next-ViT-S to a CoreML model with coremltools==5.2.0, run:

cd deployment/
python3 export_coreml_model.py --model nextvit_small --batch-size 1 --image-size 224

| Backbone | Resolution | FLOPs (G) | CoreML<br/>Latency (ms) | CoreML Model |
|---|---|---|---|---|
| Next-ViT-S | 224 | 5.8 | 3.5 | mlmodel |
| Next-ViT-B | 224 | 8.3 | 4.5 | mlmodel |
| Next-ViT-L | 224 | 10.8 | 5.5 | mlmodel |

We uniformly benchmark CoreML latency on an iPhone 12 Pro Max (iOS 16.0) with Xcode 14.0. The performance report of a CoreML model can be generated directly with Xcode 14.0 (a new feature of Xcode 14.0).

<center>Figure 3. CoreML latency of Next-ViT-S/B/L.</center>
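
Per-batch latencies like these convert to throughput as batch_size / latency. A small helper for interpreting the tables (hypothetical, not part of this repo's scripts):

```python
def throughput(latency_ms, batch_size=1):
    # Images per second implied by a per-batch latency in milliseconds.
    return batch_size * 1000.0 / latency_ms

# Next-ViT-S at 3.5 ms per image on CoreML -> ~285.7 images/s
print(round(throughput(3.5), 1))
```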

TensorRT

To convert Next-ViT-S to a TensorRT engine with tensorrt==8.0.3.4, run:

cd deployment/
python3 export_tensorrt_engine.py --model nextvit_small --batch-size 8  --image-size 224 --datatype fp16 --profile True --trtexec-path /usr/bin/trtexec

Citation

If you find this project useful in your research, please consider citing:

@article{li2022next,
  title={Next-ViT: Next Generation Vision Transformer for Efficient Deployment in Realistic Industrial Scenarios},
  author={Li, Jiashi and Xia, Xin and Li, Wei and Li, Huixia and Wang, Xing and Xiao, Xuefeng and Wang, Rui and Zheng, Min and Pan, Xin},
  journal={arXiv preprint arXiv:2207.05501},
  year={2022}
}

Acknowledgement

We heavily borrow the code from Twins.

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.