Awesome Transformer for Vision Resources List

A curated list of papers and resources on Transformer-based research, mainly for vision and graphics tasks.

Contents

- [Papers](#papers)
  - [Original Paper](#papers-ori)
  - [2D Vision Tasks](#papers-2d)
    - [Classification](#papers-classification)
    - [Detection](#papers-detection)
    - [Segmentation](#papers-segmentation)
    - [Tracking](#papers-tracking)
    - [Image Synthesis](#papers-image-synthesis)
    - [Action Understanding](#papers-action)
  - [3D Vision Tasks](#papers-3d)
    - [Point Cloud Processing](#papers-point-cloud)
    - [Motion Modeling](#papers-motion)
    - [Human Body Modeling](#papers-body)
  - [Others](#papers-others)
    - [Music Modeling](#papers-music)
- Contributing

<a name="papers"></a>

Papers

<a name="papers-ori"></a>

Original Paper

Attention Is All You Need. Ashish Vaswani*, Noam Shazeer*, Niki Parmar*, Jakob Uszkoreit*, Llion Jones*, Aidan N. Gomez*, Łukasz Kaiser*, Illia Polosukhin*. NeurIPS 2017.
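For context, the core operation this paper introduces, scaled dot-product attention, can be sketched in a few lines of NumPy (a minimal illustration for readers new to the topic, not code from the paper; the toy shapes below are arbitrary):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, as in Vaswani et al. (2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of values

# Toy example: 2 queries attending over 3 key/value pairs, d_k = 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one output vector per query: (2, 4)
```

Multi-head attention in the paper simply runs several such operations in parallel on learned linear projections of Q, K, and V, then concatenates the results.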

<a name="papers-2d"></a>

2D Vision Tasks

<a name="papers-classification"></a>

Classification

An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. Alexey Dosovitskiy*, Lucas Beyer*, Alexander Kolesnikov*, Dirk Weissenborn*, Xiaohua Zhai*, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. arXiv 2020.

<a name="papers-detection"></a>

Detection

Fast Convergence of DETR with Spatially Modulated Co-Attention. Peng Gao, Minghang Zheng, Xiaogang Wang, Jifeng Dai, Hongsheng Li. arXiv 2021.

End-to-End Object Detection with Adaptive Clustering Transformer. Minghang Zheng, Peng Gao, Xiaogang Wang, Hongsheng Li, Hao Dong. arXiv 2020.

Toward Transformer-Based Object Detection. Josh Beal*, Eric Kim*, Eric Tzeng, Dong Huk Park, Andrew Zhai, Dmitry Kislyuk. arXiv 2020.

Rethinking Transformer-based Set Prediction for Object Detection. Zhiqing Sun*, Shengcao Cao*, Yiming Yang, Kris Kitani. arXiv 2020.

UP-DETR: Unsupervised Pre-training for Object Detection with Transformers. Zhigang Dai, Bolun Cai, Yugeng Lin, Junying Chen. arXiv 2020.

Deformable DETR: Deformable Transformers for End-to-End Object Detection. Xizhou Zhu*, Weijie Su*, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. arXiv 2020.

End-to-End Object Detection with Transformers. Nicolas Carion*, Francisco Massa*, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko. ECCV 2020.

<a name="papers-segmentation"></a>

Segmentation

Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers. Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H.S. Torr, Li Zhang. arXiv 2020.

End-to-End Video Instance Segmentation with Transformers. Yuqing Wang, Zhaoliang Xu, Xinlong Wang, Chunhua Shen, Baoshan Cheng, Hao Shen, Huaxia Xia. arXiv 2020.

<a name="papers-tracking"></a>

Tracking

TransTrack: Multiple-Object Tracking with Transformer. Peize Sun, Yi Jiang, Rufeng Zhang, Enze Xie, Jinkun Cao, Xinting Hu, Tao Kong, Zehuan Yuan, Changhu Wang, Ping Luo. arXiv 2020.

<a name="papers-image-synthesis"></a>

Image Synthesis

Taming Transformers for High-Resolution Image Synthesis. Patrick Esser*, Robin Rombach*, Björn Ommer. arXiv 2020.

<a name="papers-action"></a>

Action Understanding

Video Action Transformer Network. Rohit Girdhar, Joao Carreira, Carl Doersch, Andrew Zisserman. CVPR 2019.

<a name="papers-3d"></a>

3D Vision Tasks

<a name="papers-point-cloud"></a>

Point Cloud Processing

PCT: Point Cloud Transformer. Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R. Martin, Shi-Min Hu. arXiv 2020.

Point Transformer. Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, Vladlen Koltun. arXiv 2020.

<a name="papers-motion"></a>

Motion Modeling

Learning to Generate Diverse Dance Motions with Transformer. Jiaman Li, Yihang Yin, Hang Chu, Yi Zhou, Tingwu Wang, Sanja Fidler, Hao Li. arXiv 2020.

A Spatio-temporal Transformer for 3D Human Motion Prediction. Emre Aksan*, Peng Cao*, Manuel Kaufmann, Otmar Hilliges. arXiv 2020.

<a name="papers-body"></a>

Human Body Modeling

End-to-End Human Pose and Mesh Reconstruction with Transformers. Kevin Lin, Lijuan Wang, Zicheng Liu. arXiv 2020.

<a name="papers-others"></a>

Others

<a name="papers-music"></a>

Music Modeling

Music Transformer: Generating Music with Long-Term Structure. Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Ian Simon, Curtis Hawthorne, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, Douglas Eck. arXiv 2018.

Contributing

Please see CONTRIBUTING for details.