Transformers and related deep network architectures are summarized and implemented here.

Overview

Transformers: from NLP to CV


This is a practical introduction to Transformers, from Natural Language Processing (NLP) to Computer Vision (CV).

  1. Introduction
  2. ViT: Transformers for Computer Vision
  3. Visualizing the attention
  4. MLP-Mixer
  5. Hybrid MLP-Mixer + ViT
  6. ConvMixer
  7. Hybrid ConvMixer + MLP-Mixer

1) Introduction

What is wrong with RNNs and CNNs?

Learning representations of variable-length data is a basic building block of sequence-to-sequence learning, for neural machine translation, summarization, etc.

  • Recurrent Neural Networks (RNNs) are a natural fit for variable-length sentences and sequences of pixels. However, sequential computation inhibits parallelization, and there is no explicit modeling of long- and short-range dependencies.
  • Convolutional Neural Networks (CNNs) are trivial to parallelize (per layer) and exploit local dependencies. However, long-distance dependencies require many layers.

Attention!

The Transformer architecture was proposed in the paper Attention Is All You Need. As mentioned in the paper:

"We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely"

"Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train"

Machine Translation (MT) is the task of translating a sentence x from one language (the source language) to a sentence y in another language (the target language). One basic and well-known neural network architecture for neural machine translation (NMT) is called sequence-to-sequence (seq2seq), and it involves two RNNs.

  • Encoder: an RNN that encodes the input sequence into a single vector (the sentence encoding).
  • Decoder: an RNN that generates the output sequence conditioned on the encoder's output (a conditioned language model).

[Figure: seq2seq encoder-decoder]

The problem with the vanilla seq2seq model is the information bottleneck: the encoding of the source sentence needs to capture all information about it in a single vector.

As mentioned in the paper Neural Machine Translation by Jointly Learning to Align and Translate:

"A potential issue with this encoder–decoder approach is that a neural network needs to be able to compress all the necessary information of a source sentence into a fixed-length vector. This may make it difficult for the neural network to cope with long sentences, especially those that are longer than the sentences in the training corpus."

[Figure: attention animation]

Attention provides a solution to the bottleneck problem:

  • Core idea: on each step of the decoder, use a direct connection to the encoder to focus on a particular part of the source sequence. Attention is basically a technique to compute a weighted sum of the values (in the encoder), dependent on another value (in the decoder).

The main idea of attention can be summarized, as mentioned in OpenAI's article:

"... every output element is connected to every input element, and the weightings between them are dynamically calculated based upon the circumstances, a process called attention."

Queries and Values

  • In the seq2seq + attention model, each decoder hidden state (the query) attends to all the encoder hidden states (the values).
  • The weighted sum is a selective summary of the information contained in the values, where the query determines which values to focus on.
  • Attention is a way to obtain a fixed-size representation of an arbitrary set of representations (the values), dependent on some other representation (the query), as the sketch below illustrates.
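
As a concrete illustration, here is a minimal sketch of dot-product attention as such a weighted sum (the shapes and names are made up for this example, not taken from the notebooks):

```python
import torch
import torch.nn.functional as F

d = 64                                # hidden size (illustrative)
query = torch.randn(d)                # one decoder hidden state (the query)
values = torch.randn(10, d)           # ten encoder hidden states (the values)

scores = values @ query / d ** 0.5    # similarity of the query to each value
weights = F.softmax(scores, dim=0)    # attention distribution (sums to 1)
context = weights @ values            # fixed-size weighted sum of the values
print(context.shape)                  # torch.Size([64])
```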

2) Transformers for Computer Vision

Transformer-based architectures are used not only for NLP but also for computer vision tasks. One important example is the Vision Transformer (ViT), a direct application of Transformers to image classification, without any image-specific inductive biases. As mentioned in the paper:

"We show that reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks"

"Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks"

[Figure: ViT architecture]

As we can see, an input image is split into patches, which are treated the same way as tokens (words) in an NLP application. Position embeddings are added to the patch embeddings to retain positional information. Similar to BERT's class token, a learnable classification token is prepended to the sequence, and a classification head is attached to it during pre-training and fine-tuning. The model is trained on image classification in a supervised fashion.
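
Here is a minimal sketch of the patch-embedding step (hyper-parameters such as `img_size = 32` and `patch_size = 4` are illustrative assumptions, not the notebooks' exact values):

```python
import torch
import torch.nn as nn

img_size, patch_size, dim = 32, 4, 128        # illustrative values
num_patches = (img_size // patch_size) ** 2   # 8 * 8 = 64 patches

# A conv with kernel = stride = patch_size is a common way to patchify + project.
to_patches = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
cls_token = nn.Parameter(torch.zeros(1, 1, dim))               # learnable class token
pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim)) # learnable positions

x = torch.randn(1, 3, img_size, img_size)            # one RGB image
tokens = to_patches(x).flatten(2).transpose(1, 2)    # (1, 64, dim)
tokens = torch.cat([cls_token, tokens], dim=1)       # prepend the class token
tokens = tokens + pos_embed                          # retain positional information
print(tokens.shape)                                  # torch.Size([1, 65, 128])
```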

Multi-head attention

The intuition is similar to having multiple filters in CNNs: multi-head attention gives the network more capacity and the ability to learn different attention patterns. By having multiple different layers that generate (or project) the vectors of queries, keys and values, we can learn multiple representations of these queries, keys and values.

[Figure: multi-head attention]

Each token is projected (in a learnable way) into three vectors Q, K, and V (see the sketch after this list):

  • Q: Query vector: What I want
  • K: Key vector: What type of info I have
  • V: Value vector: What actual info I have
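
Here is a minimal multi-head self-attention sketch (the dimensions and variable names are illustrative assumptions): each head gets its own learned projections of Q, K and V, so it can learn a different attention pattern, and the head outputs are concatenated and projected back.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, heads = 128, 4
head_dim = dim // heads
to_qkv = nn.Linear(dim, 3 * dim)   # learnable projections for Q, K, V
to_out = nn.Linear(dim, dim)

x = torch.randn(1, 65, dim)        # (batch, tokens, dim), e.g. 64 patches + class token
q, k, v = to_qkv(x).chunk(3, dim=-1)

def split_heads(t):                # (B, N, dim) -> (B, heads, N, head_dim)
    b, n, _ = t.shape
    return t.view(b, n, heads, head_dim).transpose(1, 2)

q, k, v = map(split_heads, (q, k, v))
attn = F.softmax(q @ k.transpose(-2, -1) / head_dim ** 0.5, dim=-1)  # (B, heads, N, N)
out = to_out((attn @ v).transpose(1, 2).reshape(1, 65, dim))         # merge heads
```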

3) Visualizing the attention

Open In Colab

The basic ViT architecture is used, however with only one transformer layer with one (or four) head(s) for simplicity. The model is trained on the CIFAR-10 classification task. The image is split into 12 x 12 = 144 patches, and after training we can inspect the 144 x 144 attention scores (where each patch can attend to all the others).

[Figure: image split into patches]

The attention map represents the correlation (attention) between all the tokens; each row sums to 1, representing the probability distribution of attention from one query patch over all the patches.

[Figure: attention map]
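
For illustration, here is one way such a map could be plotted; the `attn` tensor below is a random stand-in for the attention weights captured from the model (how to expose them depends on the implementation).

```python
import torch
import matplotlib.pyplot as plt

# Stand-in for the captured attention weights of the single transformer layer:
# shape (batch, heads, 144, 144); replace with the real tensor from the model.
attn = torch.softmax(torch.randn(1, 4, 144, 144), dim=-1)

scores = attn[0].mean(0).detach().cpu().numpy()   # average over heads -> (144, 144)
plt.imshow(scores, cmap="viridis")
plt.xlabel("key patch")
plt.ylabel("query patch")
plt.title("Attention scores (each row sums to 1)")
plt.show()
```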

Long-distance attention: we can see two interesting patterns, where a background patch attends to other distant background patches, and a plane patch attends to other distant plane patches.

[Figure: long-distance attention patterns]

We can try more heads and more transformer layers and inspect the attention patterns.

[Figure: attention animation]


4) MLP-Mixer

Open In Colab

MLP-Mixer was proposed in the paper MLP-Mixer: An all-MLP Architecture for Vision. As mentioned in the paper:

"While convolutions and attention are both sufficient for good performance, neither of them is necessary!"

"Mixer is a competitive but conceptually and technically simple alternative, that does not use convolutions or self-attention"

Mixer accepts a sequence of linearly projected image patches (tokens) shaped as a “patches × channels” table as input, and maintains this dimensionality. Mixer makes use of two types of MLP layers:

[Figure: MLP-Mixer architecture]

  • Channel-mixing MLPs allow communication between different channels; they operate on each token independently and take individual rows of the table as inputs.
  • Token-mixing MLPs allow communication between different spatial locations (tokens); they operate on each channel independently and take individual columns of the table as inputs.

These two types of layers are interleaved to enable interaction across both input dimensions.
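
Here is a minimal sketch of one Mixer block under these definitions (the sizes are illustrative assumptions), with the layer norms and residual connections shown in the paper's figure:

```python
import torch
import torch.nn as nn

num_patches, dim, token_hidden, channel_hidden = 64, 128, 256, 512  # illustrative

def mlp(in_dim, hidden):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.GELU(), nn.Linear(hidden, in_dim))

norm1, norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
token_mlp = mlp(num_patches, token_hidden)    # mixes across tokens (columns)
channel_mlp = mlp(dim, channel_hidden)        # mixes across channels (rows)

x = torch.randn(1, num_patches, dim)          # the "patches x channels" table
x = x + token_mlp(norm1(x).transpose(1, 2)).transpose(1, 2)  # token mixing
x = x + channel_mlp(norm2(x))                                # channel mixing
```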

"The computational complexity of the network is linear in the number of input patches, unlike ViT whose complexity is quadratic"

"Unlike ViTs, Mixer does not use position embeddings"

It is commonly observed that the first layers of CNNs tend to learn detectors that act on pixels in local regions of the image. In contrast, Mixer allows for global information exchange in the token-mixing MLPs.

"Recall that the token-mixing MLPs allow global communication between different spatial locations."

[Figure: hidden units of the token-mixing MLPs]

The figure shows hidden units of the four token-mixing MLPs of a Mixer trained on the CIFAR-10 dataset.


5) Hybrid MLP-Mixer and ViT

Open In Colab

We can use both the MLP-Mixer and ViT in one network architecture to get the best of both worlds.

[Figure: hybrid MLP-Mixer + ViT architecture]

Adding a few self-attention sublayers to Mixer is expected to offer a simple way to trade off speed for accuracy.
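
One way such a hybrid could look (an assumed design for illustration, not necessarily the notebook's exact architecture) is to alternate token-mixing MLP blocks with standard transformer encoder layers:

```python
import torch
import torch.nn as nn

num_patches, dim = 64, 128   # illustrative values

class TokenMixer(nn.Module):
    """Token-mixing MLP block (as sketched in the MLP-Mixer section)."""
    def __init__(self):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(num_patches, 256), nn.GELU(),
                                 nn.Linear(256, num_patches))
    def forward(self, x):    # x: (batch, num_patches, dim)
        return x + self.mlp(self.norm(x).transpose(1, 2)).transpose(1, 2)

hybrid = nn.Sequential(      # cheap global mixing interleaved with self-attention
    TokenMixer(),
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    TokenMixer(),
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True))

x = torch.randn(1, num_patches, dim)
print(hybrid(x).shape)       # torch.Size([1, 64, 128])
```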


6) ConvMixer

Open In Colab

Patches Are All You Need?

Is the performance of ViTs due to the inherently more powerful Transformer architecture, or is it at least partly due to using patches as the input representation?

ConvMixer is an extremely simple model that is similar in many aspects to the ViT and the even more basic MLP-Mixer.

Despite its simplicity, ConvMixer outperforms the ViT, MLP-Mixer, and some of their variants for similar parameter counts and dataset sizes, in addition to outperforming classical vision models such as the ResNet.

While self-attention and MLPs are theoretically more flexible, allowing for large receptive fields and content-aware behavior, the inductive bias of convolution is well-suited to vision tasks and leads to high data efficiency.

However, ConvMixers are substantially slower at inference than the competitors!

[Figure: ConvMixer architecture]
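
For reference, here is a minimal sketch of one ConvMixer block (hyper-parameters are illustrative assumptions): a depthwise convolution mixes spatial locations, with a residual connection, and a pointwise (1x1) convolution mixes channels, each followed by GELU and batch norm.

```python
import torch
import torch.nn as nn

dim, kernel_size = 128, 9    # illustrative values

depthwise = nn.Sequential(   # spatial mixing: one filter per channel
    nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
    nn.GELU(),
    nn.BatchNorm2d(dim))
pointwise = nn.Sequential(   # channel mixing: 1x1 convolution
    nn.Conv2d(dim, dim, kernel_size=1),
    nn.GELU(),
    nn.BatchNorm2d(dim))

x = torch.randn(1, dim, 8, 8)   # patch embeddings kept as a 2D grid
x = x + depthwise(x)            # residual connection around the depthwise conv
x = pointwise(x)
```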


7) Hybrid MLP-Mixer and ConvMixer

Open In Colab

Once again, we can use both the MLP-Mixer and ConvMixer in one network architecture to get the best of both worlds. Here is a simple example.

[Figure: hybrid ConvMixer + MLP-Mixer architecture]

