Introduction

This repository is the official implementation of Contextual Transformer Networks for Visual Recognition.

CoT is a unified self-attention building block, and acts as an alternative to standard convolutions in ConvNet. As a result, it is feasible to replace convolutions with their CoT counterparts for strengthening vision backbones with contextualized self-attention.

<p align="center"> <img src="images/framework.jpg" width="800"/> </p>
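For intuition, below is a minimal, hypothetical sketch of the CoT idea in PyTorch. It is not the repository's implementation: the official block aggregates values with a local softmax attention over k x k neighborhoods, which this sketch approximates with a simple element-wise gate, and the group count of 4 is an assumption.

```python
import torch
import torch.nn as nn

class CoTSketch(nn.Module):
    """Simplified Contextual Transformer block (illustrative only)."""

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        # Static context: a k x k grouped convolution mines local
        # context among neighboring keys.
        self.key_embed = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, padding=kernel_size // 2,
                      groups=4, bias=False),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
        )
        # Value embedding via a 1 x 1 convolution.
        self.value_embed = nn.Sequential(
            nn.Conv2d(dim, dim, 1, bias=False),
            nn.BatchNorm2d(dim),
        )
        # Attention is predicted from the concatenation of the query
        # (the input itself) and the contextualized keys.
        self.attn = nn.Sequential(
            nn.Conv2d(2 * dim, dim // 4, 1, bias=False),
            nn.BatchNorm2d(dim // 4),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim // 4, dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k_static = self.key_embed(x)            # static context
        v = self.value_embed(x)
        a = self.attn(torch.cat([k_static, x], dim=1))
        k_dynamic = a.sigmoid() * v             # dynamic context; the gate stands in
                                                # for the paper's local softmax attention
        return k_static + k_dynamic             # fuse static and dynamic contexts

# Drop-in usage: same input/output shape as a 3x3 convolution.
block = CoTSketch(dim=64)
y = block(torch.randn(2, 64, 32, 32))           # -> torch.Size([2, 64, 32, 32])
```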

Rank 1 in the CVPR 2021 Open World Image Classification Challenge (held 2021/3/25 - 2021/6/5). Team name: VARMS.

Usage

The code is mainly based on timm.

Clone the repository:

git clone https://github.com/JDAI-CV/CoTNet.git

Train

First, download the ImageNet dataset. To train CoTNet-50 on ImageNet on a single node with 8 GPUs for 350 epochs, run:

python -m torch.distributed.launch --nproc_per_node=8 train.py --folder ./experiments/cot_experiments/CoTNet-50-350epoch

The training scripts for CoTNet (e.g., CoTNet-50) can be found in the cot_experiments folder.
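For reference, the launch command above spawns one process per GPU, and each process follows the standard PyTorch distributed-data-parallel pattern sketched below. This is a generic illustration, not the repository's train.py; the model and batch are placeholders, and reading LOCAL_RANK from the environment assumes an env-style launcher such as torchrun (older torch.distributed.launch versions pass --local_rank as a command-line argument instead).

```python
# Generic DistributedDataParallel skeleton; run under torchrun or an
# equivalent launcher, which sets RANK/WORLD_SIZE/LOCAL_RANK etc.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    dist.init_process_group(backend="nccl")     # one process per GPU
    torch.cuda.set_device(local_rank)

    model = torch.nn.Conv2d(3, 64, 3).cuda()    # placeholder model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.randn(8, 3, 224, 224).cuda()      # placeholder batch
    loss = model(x).mean()
    loss.backward()                             # gradients all-reduced across ranks
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```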

Inference Time vs. Accuracy

CoTNet models consistently achieve higher top-1 accuracy at lower inference time than other vision backbones, under both the default and the advanced training setups. In short, CoTNet models offer a better inference time vs. accuracy trade-off than existing vision backbones.

<p align="center"> <img src="images/inference_time.jpg" width="800"/> </p>

Results on ImageNet

| name | resolution | #params | FLOPs (G) | Top-1 Acc. (%) | Top-5 Acc. (%) | model |
|------|------------|---------|-----------|----------------|----------------|-------|
| CoTNet-50 | 224 | 22.2M | 3.3 | 81.3 | 95.6 | GoogleDrive / Baidu |
| CoTNeXt-50 | 224 | 30.1M | 4.3 | 82.1 | 95.9 | GoogleDrive / Baidu |
| SE-CoTNetD-50 | 224 | 23.1M | 4.1 | 81.6 | 95.8 | GoogleDrive / Baidu |
| CoTNet-101 | 224 | 38.3M | 6.1 | 82.8 | 96.2 | GoogleDrive / Baidu |
| CoTNeXt-101 | 224 | 53.4M | 8.2 | 83.2 | 96.4 | GoogleDrive / Baidu |
| SE-CoTNetD-101 | 224 | 40.9M | 8.5 | 83.2 | 96.5 | GoogleDrive / Baidu |
| SE-CoTNetD-152 | 224 | 55.8M | 17.0 | 84.0 | 97.0 | GoogleDrive / Baidu |
| SE-CoTNetD-152 | 320 | 55.8M | 26.5 | 84.6 | 97.1 | GoogleDrive / Baidu |

The access code for the Baidu links is `cotn`.
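Assuming the downloaded checkpoints are standard PyTorch state dicts and that the repository registers its models with timm (both assumptions; the model name "cotnet50" below is hypothetical, so check the repo's model files for the exact entrypoint names), loading one for evaluation could look roughly like this:

```python
import timm
import torch

# Assumption: the repo registers a timm entrypoint named "cotnet50".
model = timm.create_model("cotnet50", pretrained=False)

# Checkpoints sometimes wrap the weights under a "state_dict" key.
state = torch.load("cotnet50.pth", map_location="cpu")
model.load_state_dict(state.get("state_dict", state))
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # placeholder input
print(logits.argmax(dim=1))
```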

CoTNet on downstream tasks

For Object Detection and Instance Segmentation, please see CoTNet for Object Detection and Instance Segmentation.

Citing Contextual Transformer Networks

@article{cotnet,
  title={Contextual Transformer Networks for Visual Recognition},
  author={Li, Yehao and Yao, Ting and Pan, Yingwei and Mei, Tao},
  journal={arXiv preprint arXiv:2107.12292},
  year={2021}
}

Acknowledgements

Thanks to timm and the awesome PyTorch team for their contributions.