# ConvNeXt V2<br><sub>Official PyTorch Implementation</sub>

This repo contains the PyTorch implementation of our ConvNeXt V2 paper: model definitions for all 8 sizes (Atto, Femto, Pico, Nano, Tiny, Base, Large, Huge), pre-training and fine-tuning code, and pre-trained weights (converted from the original JAX weights trained on TPU).

ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders<br> Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon and Saining Xie
<br>KAIST, Meta AI and New York University<br>

We propose a fully convolutional masked autoencoder framework (FCMAE) and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks. We also provide pre-trained ConvNeXt V2 models of various sizes.

<p align="center"> <img src="figures/fcmae_convnextv2.png" width="70%" height="70%" class="center"> </p> <p align="center"> <img src="figures/model_scaling.png" width="50%" height="50%" class="center"> </p>
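For reference, below is a minimal sketch of the GRN layer described above, following the paper's three steps (global feature aggregation, feature normalization, and calibration). It assumes the channels-last `(N, H, W, C)` tensor layout used inside ConvNeXt blocks; the epsilon value is illustrative:

```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization (sketch).
    Assumes channels-last input of shape (N, H, W, C)."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        # Learnable affine parameters, initialized to zero so that, together
        # with the residual term below, the layer starts out as an identity.
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.eps = eps

    def forward(self, x):
        # Aggregate: per-channel L2 norm over the spatial dimensions
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)      # (N, 1, 1, C)
        # Normalize: divisive normalization across channels
        nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)   # (N, 1, 1, C)
        # Calibrate the input with the normalized response, plus residual
        return self.gamma * (x * nx) + self.beta + x
```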

## Results and Pre-trained Models

### ImageNet-1K FCMAE pre-trained weights (self-supervised)

| name | resolution | #params | model |
|:---|:---:|:---:|:---:|
| ConvNeXt V2-A | 224x224 | 3.7M | model |
| ConvNeXt V2-F | 224x224 | 5.2M | model |
| ConvNeXt V2-P | 224x224 | 9.1M | model |
| ConvNeXt V2-N | 224x224 | 15.6M | model |
| ConvNeXt V2-T | 224x224 | 28.6M | model |
| ConvNeXt V2-B | 224x224 | 89M | model |
| ConvNeXt V2-L | 224x224 | 198M | model |
| ConvNeXt V2-H | 224x224 | 660M | model |

### ImageNet-1K fine-tuned models

| name | resolution | acc@1 | #params | FLOPs | model |
|:---|:---:|:---:|:---:|:---:|:---:|
| ConvNeXt V2-A | 224x224 | 76.7 | 3.7M | 0.55G | model |
| ConvNeXt V2-F | 224x224 | 78.5 | 5.2M | 0.78G | model |
| ConvNeXt V2-P | 224x224 | 80.3 | 9.1M | 1.37G | model |
| ConvNeXt V2-N | 224x224 | 81.9 | 15.6M | 2.45G | model |
| ConvNeXt V2-T | 224x224 | 83.0 | 28.6M | 4.47G | model |
| ConvNeXt V2-B | 224x224 | 84.9 | 89M | 15.4G | model |
| ConvNeXt V2-L | 224x224 | 85.8 | 198M | 34.4G | model |
| ConvNeXt V2-H | 224x224 | 86.3 | 660M | 115G | model |

### ImageNet-22K fine-tuned models

| name | resolution | acc@1 | #params | FLOPs | model |
|:---|:---:|:---:|:---:|:---:|:---:|
| ConvNeXt V2-N | 224x224 | 82.1 | 15.6M | 2.45G | model |
| ConvNeXt V2-N | 384x384 | 83.4 | 15.6M | 7.21G | model |
| ConvNeXt V2-T | 224x224 | 83.9 | 28.6M | 4.47G | model |
| ConvNeXt V2-T | 384x384 | 85.1 | 28.6M | 13.1G | model |
| ConvNeXt V2-B | 224x224 | 86.8 | 89M | 15.4G | model |
| ConvNeXt V2-B | 384x384 | 87.7 | 89M | 45.2G | model |
| ConvNeXt V2-L | 224x224 | 87.3 | 198M | 34.4G | model |
| ConvNeXt V2-L | 384x384 | 88.2 | 198M | 101.1G | model |
| ConvNeXt V2-H | 384x384 | 88.7 | 660M | 337.9G | model |
| ConvNeXt V2-H | 512x512 | 88.9 | 660M | 600.8G | model |
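As a quick sanity check after downloading a checkpoint, it can be loaded into one of the model definitions in this repo. This is a sketch only: the import path, the checkpoint filename, and the `'model'` state-dict key are assumptions based on this repo's layout, not a documented API; adjust them to match the actual files.

```python
import torch
from models.convnextv2 import convnextv2_base  # import path assumed from this repo's layout

model = convnextv2_base(num_classes=1000)
# Filename is hypothetical; substitute the weight file you actually downloaded.
checkpoint = torch.load("convnextv2_base_1k_224_ema.pt", map_location="cpu")
# The checkpoint is assumed to store weights under a 'model' key,
# as is common for MAE-style training code.
model.load_state_dict(checkpoint["model"])
model.eval()  # switch to inference mode before evaluation
```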

## Installation

Please check INSTALL.md for installation instructions.

## Evaluation

We provide example evaluation commands for ConvNeXt V2-Base:

**Single-GPU**

```
python main_finetune.py \
    --model convnextv2_base \
    --eval true \
    --resume /path/to/checkpoint \
    --input_size 224 \
    --data_path /path/to/imagenet-1k
```

**Multi-GPU**

```
python -m torch.distributed.launch --nproc_per_node=8 main_finetune.py \
    --model convnextv2_base \
    --eval true \
    --resume /path/to/checkpoint \
    --input_size 224 \
    --data_path /path/to/imagenet-1k
```
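Note that `torch.distributed.launch` is deprecated in recent PyTorch releases in favor of `torchrun`. The equivalent invocation below should work, assuming `main_finetune.py` picks up the `LOCAL_RANK` environment variable that `torchrun` sets (rather than relying on a `--local_rank` argument):

```
torchrun --nproc_per_node=8 main_finetune.py \
    --model convnextv2_base \
    --eval true \
    --resume /path/to/checkpoint \
    --input_size 224 \
    --data_path /path/to/imagenet-1k
```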

## Training

See TRAINING.md for pre-training and fine-tuning instructions.

## Acknowledgement

This repository borrows from timm, ConvNeXt and MAE.

We thank Ross Wightman for the initial design of the small-compute ConvNeXt model variants and the associated training recipe. We also appreciate the helpful discussions and feedback provided by Kaiming He.

## License

This project is released under the MIT license, except for the ImageNet pre-trained and fine-tuned models, which are licensed under CC-BY-NC. Please see the LICENSE file for more information.

## Citation

If you find this repository helpful, please consider citing:

```
@article{Woo2023ConvNeXtV2,
  title={ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders},
  author={Woo, Sanghyun and Debnath, Shoubhik and Hu, Ronghang and Chen, Xinlei and Liu, Zhuang and Kweon, In So and Xie, Saining},
  year={2023},
  journal={arXiv preprint arXiv:2301.00808},
}
```