
Transformers: from NLP to CV

<!-- ![cover](images/cover001.jpg) -->


This is a practical introduction to Transformers, from Natural Language Processing (NLP) to Computer Vision (CV).

  1. Introduction
  2. ViT: Transformers for Computer Vision
  3. Visualizing the attention <a href="https://colab.research.google.com/drive/1bE7aJedF2U-H_Byt_4vM8UQ4ZxHc3aji?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
  4. MLP-Mixer <a href="https://colab.research.google.com/drive/1T2zEG8iTG4e-YOV-AQPAD41FJ3ud3qLH?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
  5. Hybrid MLP-Mixer + ViT <a href="https://colab.research.google.com/drive/1sHyV7_vpvmBiGyK5BkviyPq6YSw3a2r9?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
  6. ConvMixer <a href="https://colab.research.google.com/drive/1F1rtlQY0vZ9BVTtP7CuvyeBv2hEOpPTo?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
  7. Hybrid ConvMixer + MLP-Mixer <a href="https://colab.research.google.com/drive/1F1rtlQY0vZ9BVTtP7CuvyeBv2hEOpPTo?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

1) Introduction

What is wrong with RNNs and CNNs?

Learning representations of variable-length data is a basic building block of sequence-to-sequence learning for neural machine translation, summarization, etc.

Attention!

The Transformer architecture was proposed in the paper Attention is All You Need. As mentioned in the paper:

"We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely"

"Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train"

Machine Translation (MT) is the task of translating a sentence x from one language (the source language) into a sentence y in another language (the target language). One basic and well-known neural network architecture for NMT is the sequence-to-sequence (seq2seq) model, which involves two RNNs: an encoder and a decoder.

*(figure: the seq2seq encoder–decoder architecture)*

The problem with the vanilla seq2seq model is the information bottleneck: the encoding of the source sentence needs to capture all information about it in a single fixed-size vector.

As mentioned in the paper Neural Machine Translation by Jointly Learning to Align and Translate:

"A potential issue with this encoder–decoder approach is that a neural network needs to be able to compress all the necessary information of a source sentence into a fixed-length vector. This may make it difficult for the neural network to cope with long sentences, especially those that are longer than the sentences in the training corpus."

*(animation: the attention mechanism)*

Attention provides a solution to the bottleneck problem

The main idea of attention can be summarized as mentioned in OpenAI's article:

"... every output element is connected to every input element, and the weightings between them are dynamically calculated based upon the circumstances, a process called attention."

Query and Values
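In attention, each query is scored against a set of keys, and the resulting weights mix the corresponding values. Here is a minimal sketch in plain PyTorch (illustrative shapes and names, not the notebooks' code):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    # Q: (..., n_queries, d_k), K: (..., n_keys, d_k), V: (..., n_keys, d_v)
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5    # (..., n_queries, n_keys)
    weights = F.softmax(scores, dim=-1)              # each row is a probability distribution
    return weights @ V, weights

# Toy self-attention example: 4 tokens with 8-dimensional embeddings (Q = K = V = x)
x = torch.randn(4, 8)
out, att = scaled_dot_product_attention(x, x, x)
print(att.sum(dim=-1))                               # every row sums to 1
```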


2) Transformers for Computer Vision

Transformer-based architectures have been used not only for NLP but also for computer vision tasks. One important example is the Vision Transformer (ViT), which represents a direct application of Transformers to image classification, without any image-specific inductive biases. As mentioned in the paper:

"We show that reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks"

"Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks"

*(figure: the Vision Transformer (ViT) architecture)*

As we can see, the input image is split into patches, which are treated the same way as tokens (words) in an NLP application. Position embeddings are added to the patch embeddings to retain positional information. Similar to BERT's [class] token, a learnable classification token is prepended to the sequence of patch embeddings, and a classification head attached to it is used during pre-training and fine-tuning. The model is trained on image classification in a supervised fashion.
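A rough sketch of the patch embedding, the prepended class token, and the position embeddings (PyTorch, with made-up hyperparameters; the exact notebook code may differ):

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into patches and linearly project each patch to an embedding."""
    def __init__(self, img_size=32, patch_size=4, in_ch=3, dim=64):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A conv with stride = kernel = patch_size is equivalent to "split + flatten + linear"
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_emb = nn.Parameter(torch.zeros(1, self.num_patches + 1, dim))

    def forward(self, x):                                   # x: (B, 3, 32, 32)
        x = self.proj(x).flatten(2).transpose(1, 2)         # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)      # one [class] token per image
        x = torch.cat([cls, x], dim=1)                      # prepend the class token
        return x + self.pos_emb                             # add position embeddings

tokens = PatchEmbedding()(torch.randn(2, 3, 32, 32))        # (2, 65, 64)
```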

Multi-head attention

The intuition is similar to having multiple filters in CNNs. Multi-head attention gives the network more capacity and the ability to learn different attention patterns. By having multiple different layers that generate (or project) the vectors of queries, keys, and values, we can learn multiple representations of these queries, keys, and values.

*(figure: multi-head attention)*

Here, each token is projected (in a learnable way) into three vectors: Q, K, and V.
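For reference, scaled dot-product attention and its multi-head extension, as defined in Attention is All You Need, are:

$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V
$$

$$
\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\,W^{O},
\qquad \mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V})
$$

where $d_k$ is the dimensionality of the keys and the $W$ matrices are the learnable projections.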


3) Visualizing the attention

<a href="https://colab.research.google.com/drive/1bE7aJedF2U-H_Byt_4vM8UQ4ZxHc3aji?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

The basic ViT architecture is used, but with only one transformer layer with one (or four) head(s) for simplicity. The model is trained on the CIFAR-10 classification task. The image is split into 12 × 12 = 144 patches, and after training we can see the 144 × 144 attention scores (where each patch can attend to all the others).

*(figure: an image split into 12 × 12 patches)*

The attention map represents the correlation (attention) between all the tokens; each row sums to 1, representing the probability distribution of attention from a query patch to all the other patches.

*(figure: the 144 × 144 attention map)*
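A hedged sketch of how such a map can be inspected with matplotlib, assuming the 144 × 144 attention scores have been extracted into an array `att` (here filled with random normalized rows just so the snippet runs standalone; the extraction step and variable name are assumptions, not the notebook's exact code):

```python
import numpy as np
import matplotlib.pyplot as plt

# att: a (144, 144) array of attention scores; each row is a probability distribution.
# Random placeholder values are used here so the snippet runs on its own.
att = np.random.rand(144, 144)
att /= att.sum(axis=1, keepdims=True)

query_idx = 77                                   # pick one query patch
plt.subplot(1, 2, 1)
plt.imshow(att)                                  # the full 144 x 144 attention map
plt.title("attention map")
plt.subplot(1, 2, 2)
plt.imshow(att[query_idx].reshape(12, 12))       # where this patch attends, on the 12 x 12 grid
plt.title(f"attention of patch {query_idx}")
plt.show()
```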

Long-distance attention: we can see two interesting patterns, where a background patch attends to distant background patches, and an airplane patch attends to distant airplane patches.

*(figure: long-distance attention patterns)*

We can try more heads and more transformer layers and inspect the attention patterns.

*(animation: attention patterns with more heads and layers)*


4) MLP-Mixer

<a href="https://colab.research.google.com/drive/1T2zEG8iTG4e-YOV-AQPAD41FJ3ud3qLH?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

MLP-Mixer was proposed in the paper MLP-Mixer: An all-MLP Architecture for Vision. As mentioned in the paper:

"While convolutions and attention are both sufficient for good performance, neither of them is necessary!"

"Mixer is a competitive but conceptually and technically simple alternative, that does not use convolutions or self-attention"

Mixer accepts a sequence of linearly projected image patches (tokens) shaped as a "patches × channels" table as an input, and maintains this dimensionality. Mixer makes use of two types of MLP layers: channel-mixing MLPs, which act on each token independently and mix features across channels, and token-mixing MLPs, which act on each channel independently and mix information across tokens (spatial locations).

*(figure: the MLP-Mixer architecture)*

These two types of layers are interleaved to enable interaction of both input dimensions.
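A minimal PyTorch sketch of one Mixer layer with these two MLP types (illustrative sizes; the notebook implementation may differ in details such as the hidden widths):

```python
import torch
import torch.nn as nn

def mlp(in_dim, hidden_dim):
    return nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, in_dim))

class MixerBlock(nn.Module):
    """One Mixer layer: token-mixing MLP across patches, then channel-mixing MLP across channels."""
    def __init__(self, num_patches=64, dim=128, token_hidden=64, channel_hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = mlp(num_patches, token_hidden)       # acts on the patch dimension
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = mlp(dim, channel_hidden)           # acts on the channel dimension

    def forward(self, x):                                     # x: (B, patches, channels)
        y = self.norm1(x).transpose(1, 2)                     # (B, channels, patches)
        x = x + self.token_mlp(y).transpose(1, 2)             # token mixing + skip connection
        x = x + self.channel_mlp(self.norm2(x))               # channel mixing + skip connection
        return x

out = MixerBlock()(torch.randn(2, 64, 128))                   # (2, 64, 128): (batch, patches, channels)
```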

"The computational complexity of the network is linear in the number of input patches, unlike ViT whose complexity is quadratic"

"Unlike ViTs, Mixer does not use position embeddings"

It is commonly observed that the first layers of CNNs tend to learn detectors that act on pixels in local regions of the image. In contrast, Mixer allows for global information exchange in the token-mixing MLPs.

"Recall that the token-mixing MLPs allow global communication between different spatial locations."

*(figure: hidden units of the token-mixing MLPs)*

The figure shows hidden units of the four token-mixing MLPs of a Mixer trained on the CIFAR-10 dataset.


5) Hybrid MLP-Mixer and ViT

<a href="https://colab.research.google.com/drive/1sHyV7_vpvmBiGyK5BkviyPq6YSw3a2r9?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

We can use both the MLP-Mixer and ViT in one network architecture to get the best of both worlds.

*(figure: a hybrid MLP-Mixer + ViT architecture)*

Adding a few self-attention sublayers to Mixer is expected to offer a simple way to trade off speed for accuracy.
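One illustrative way to wire this up, reusing the `MixerBlock` sketched in section 4 together with PyTorch's built-in `nn.MultiheadAttention` (the notebook's arrangement of blocks may differ):

```python
import torch
import torch.nn as nn

class AttentionBlock(nn.Module):
    """A single pre-norm self-attention sublayer with a residual connection."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (B, patches, dim)
        y = self.norm(x)
        return x + self.attn(y, y, y)[0]       # self-attention + skip connection

# Hybrid: mostly Mixer blocks, with one self-attention sublayer inserted in the middle
hybrid = nn.Sequential(
    MixerBlock(), MixerBlock(),                # MixerBlock from the MLP-Mixer sketch above
    AttentionBlock(),
    MixerBlock(), MixerBlock(),
)
out = hybrid(torch.randn(2, 64, 128))          # (2, 64, 128)
```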


6) ConvMixer

<a href="https://colab.research.google.com/drive/1F1rtlQY0vZ9BVTtP7CuvyeBv2hEOpPTo?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

ConvMixer was proposed in the paper Patches Are All You Need? The paper asks:

Is the performance of ViTs due to the inherently more powerful Transformer architecture, or is it at least partly due to using patches as the input representation?

ConvMixer is an extremely simple model that is similar in many aspects to the ViT and the even-more-basic MLP-Mixer.

Despite its simplicity, ConvMixer outperforms the ViT, MLP-Mixer, and some of their variants for similar parameter counts and dataset sizes, in addition to outperforming classical vision models such as the ResNet.

While self-attention and MLPs are theoretically more flexible, allowing for large receptive fields and content-aware behavior, the inductive bias of convolution is well-suited to vision tasks and leads to high data efficiency.

ConvMixers are substantially slower at inference than the competitors!

*(figure: the ConvMixer architecture)*
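A compact PyTorch sketch of ConvMixer, following the block structure described in the paper (a residual depthwise convolution for spatial mixing, then a pointwise convolution for channel mixing); the hyperparameters here are illustrative:

```python
import torch
import torch.nn as nn

class Residual(nn.Module):
    """Adds the input back to the output of the wrapped module."""
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(dim=256, depth=8, kernel_size=9, patch_size=7, n_classes=10):
    block = lambda: nn.Sequential(
        Residual(nn.Sequential(                                   # depthwise conv: spatial mixing
            nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
            nn.GELU(), nn.BatchNorm2d(dim))),
        nn.Conv2d(dim, dim, kernel_size=1),                       # pointwise conv: channel mixing
        nn.GELU(), nn.BatchNorm2d(dim))
    return nn.Sequential(
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),  # patch embedding
        nn.GELU(), nn.BatchNorm2d(dim),
        *[block() for _ in range(depth)],
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(dim, n_classes))

logits = conv_mixer()(torch.randn(2, 3, 224, 224))                # (2, n_classes)
```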


7) Hybrid MLP-Mixer and ConvMixer

<a href="https://colab.research.google.com/drive/1F1rtlQY0vZ9BVTtP7CuvyeBv2hEOpPTo?usp=sharing" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

Once again, we can use both the MLP-Mixer and ConvMixer in one network architecture to get the best of both worlds. Here is a simple example.

*(figure: a hybrid ConvMixer + MLP-Mixer architecture)*
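A purely illustrative sketch of such a combination, again reusing the `MixerBlock` from section 4: a ConvMixer-style convolutional stem produces the patch tokens, and MLP-Mixer blocks then process them.

```python
import torch
import torch.nn as nn

class ConvStemToTokens(nn.Module):
    """ConvMixer-style patch embedding plus one depthwise conv, flattened into a token sequence."""
    def __init__(self, dim=128, patch_size=4):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),   # patch embedding
            nn.GELU(), nn.BatchNorm2d(dim),
            nn.Conv2d(dim, dim, kernel_size=5, groups=dim, padding="same"), # local spatial mixing
            nn.GELU(), nn.BatchNorm2d(dim))

    def forward(self, x):                          # x: (B, 3, 32, 32)
        x = self.stem(x)                           # (B, dim, 8, 8)
        return x.flatten(2).transpose(1, 2)        # (B, 64, dim): tokens for the Mixer blocks

hybrid = nn.Sequential(
    ConvStemToTokens(),                            # ConvMixer-style stem
    MixerBlock(), MixerBlock(),                    # MixerBlock from the MLP-Mixer sketch above
)
tokens = hybrid(torch.randn(2, 3, 32, 32))         # (2, 64, 128)
```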


References and more information