<div align="center">

Rotary Position Embedding for Vision Transformer

Byeongho Heo, Song Park, Dongyoon Han, Sangdoo Yun <br>

NAVER AI LAB


</div>

Official PyTorch implementation of RoPE-ViT, "Rotary Position Embedding for Vision Transformer" (arXiv, ECCV 2024).

Abstract

Rotary Position Embedding (RoPE) performs remarkably on language models, especially for length extrapolation of Transformers. However, the impacts of RoPE on computer vision domains have been underexplored, even though RoPE appears capable of enhancing Vision Transformer (ViT) performance in a way similar to the language domain. This study provides a comprehensive analysis of RoPE when applied to ViTs, utilizing practical implementations of RoPE for 2D vision data. The analysis reveals that RoPE demonstrates impressive extrapolation performance, i.e., maintaining precision while increasing image resolution at inference. It eventually leads to performance improvement for ImageNet-1k, COCO detection, and ADE-20k segmentation. We believe this study provides thorough guidelines to apply RoPE into ViT, promising improved backbone performance with minimal extra computational overhead.
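To make the "practical implementations of RoPE for 2D vision data" concrete, here is a minimal, framework-free sketch of axial 2D RoPE applied to a single query/key vector. The function name, the `base=100.0` value, and the frequency schedule are illustrative assumptions, not the repository's actual PyTorch code; what the sketch demonstrates is the core property that attention scores depend only on relative patch offsets.

```python
import math

def axial_rope_2d(vec, x, y, base=100.0):
    """Rotate channel pairs of `vec` by angles proportional to the patch's
    (x, y) grid position: the first half of the pairs encodes the x axis,
    the second half the y axis (the "axial" split).
    `vec` is a flat list of floats; its length must be divisible by 4."""
    dim = len(vec)
    assert dim % 4 == 0
    pairs = dim // 2
    half = pairs // 2
    out = []
    for i in range(pairs):
        # Axial split: pair i rotates with either the x or the y coordinate.
        if i < half:
            theta = x * base ** (-i / half)            # assumed frequency schedule
        else:
            theta = y * base ** (-(i - half) / half)
        a, b = vec[2 * i], vec[2 * i + 1]
        # Treat the pair (a, b) as a complex number and rotate it by theta.
        rot = complex(a, b) * complex(math.cos(theta), math.sin(theta))
        out.extend([rot.real, rot.imag])
    return out
```

Because every pair undergoes a pure rotation, the inner product between a rotated query at (x1, y1) and a rotated key at (x2, y2) depends only on the offset (x2 - x1, y2 - y1). This is what enables resolution extrapolation: grid positions never seen during training still yield valid relative encodings.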

Updates

Getting Started

The RoPE implementation for each model family can be found in the corresponding folder.
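The model names below distinguish two variants, `rope_axial_*` and `rope_mixed_*`. The difference can be sketched as follows: mixed RoPE gives every channel pair its own 2D frequency vector (learnable per head in the actual models), so the rotation angle mixes both spatial axes instead of being tied to one. This is a hedged, plain-Python illustration, not the repository's implementation; the frequencies passed in are example values.

```python
import math

def mixed_rope_2d(vec, x, y, freqs):
    """RoPE-Mixed-style rotation: each channel pair (a, b) rotates by
    theta = fx * x + fy * y, with (fx, fy) taken from `freqs`.
    In RoPE-ViT these frequencies are learnable parameters; here they
    are plain floats for illustration."""
    assert len(vec) == 2 * len(freqs)
    out = []
    for i, (fx, fy) in enumerate(freqs):
        theta = fx * x + fy * y  # the angle mixes both spatial axes
        a, b = vec[2 * i], vec[2 * i + 1]
        rot = complex(a, b) * complex(math.cos(theta), math.sin(theta))
        out.extend([rot.real, rot.imag])
    return out
```

Since theta is still linear in (x, y), the relative-offset property of axial RoPE is preserved, but with non-axis-aligned frequencies the attention scores can also vary along diagonal directions.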

Performances

Performance comparison plots: DeiT-III vs. RoPE-ViT, and Swin Transformer vs. RoPE-ViT.

Pre-trained weights

DeiT-III (400 epochs)

| Model Name | Top-1 (224) | Top-1 (384) | Weights |
| --- | --- | --- | --- |
| deit_small_patch16_LS | 80.4 | 79.4 | HF hub / Google drive |
| rope_axial_deit_small_patch16_LS | 80.9 | 80.0 | HF hub / Google drive |
| rope_mixed_deit_small_patch16_LS | 80.9 | 81.8 | HF hub / Google drive |
| rope_axial_ape_deit_small_patch16_LS | 80.7 | 81.2 | HF hub / Google drive |
| rope_mixed_ape_deit_small_patch16_LS | 80.9 | 81.7 | HF hub / Google drive |
| deit_base_patch16_LS | 83.4 | 82.8 | HF hub / Google drive |
| rope_axial_deit_base_patch16_LS | 83.6 | 83.9 | HF hub / Google drive |
| rope_mixed_deit_base_patch16_LS | 83.8 | 84.4 | HF hub / Google drive |
| rope_axial_ape_deit_base_patch16_LS | 83.7 | 83.8 | HF hub / Google drive |
| rope_mixed_ape_deit_base_patch16_LS | 83.8 | 84.3 | HF hub / Google drive |
| deit_large_patch16_LS | 84.6 | 84.2 | HF hub / Google drive |
| rope_axial_deit_large_patch16_LS | 84.7 | 85.1 | HF hub / Google drive |
| rope_mixed_deit_large_patch16_LS | 84.8 | 85.6 | HF hub / Google drive |
| rope_axial_ape_deit_large_patch16_LS | 84.7 | 85.1 | HF hub / Google drive |
| rope_mixed_ape_deit_large_patch16_LS | 84.9 | 85.5 | HF hub / Google drive |

Swin Transformer (300 epochs)

| Model Name | Top-1 (224) | Top-1 (384) | Weights |
| --- | --- | --- | --- |
| swin_tiny_patch4_window7_224 | 81.2 | 78.9 | |
| swin_rope_axial_tiny_patch4_window7_224 | 81.3 | 79.2 | HF hub / Google drive |
| swin_rope_mixed_tiny_patch4_window7_224 | 81.4 | 79.5 | HF hub / Google drive |
| swin_small_patch4_window7_224 | 82.9 | 81.0 | |
| swin_rope_axial_small_patch4_window7_224 | 83.1 | 80.9 | HF hub / Google drive |
| swin_rope_mixed_small_patch4_window7_224 | 83.0 | 81.4 | HF hub / Google drive |
| swin_base_patch4_window7_224 | 83.3 | 81.2 | |
| swin_rope_axial_base_patch4_window7_224 | 83.6 | 81.8 | HF hub / Google drive |
| swin_rope_mixed_base_patch4_window7_224 | 83.7 | 82.1 | HF hub / Google drive |

How to cite

@inproceedings{heo2024ropevit,
    title={Rotary Position Embedding for Vision Transformer},
    author={Heo, Byeongho and Park, Song and Han, Dongyoon and Yun, Sangdoo},
    year={2024},
    booktitle={European Conference on Computer Vision (ECCV)},
}

License

This project is distributed under the Apache-2.0 license, <br> except for the files below, which originated from https://github.com/meta-llama/codellama.

RoPE-ViT
Copyright (c) 2024-present NAVER Cloud Corp.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.