# <img src="https://raw.githubusercontent.com/sberbank-ai/ru-dolph/master/pics/rudolph.png" height="60"/> RUDOLPH 🦌🎄☃️
One Hyper-Tasking Transformer that can be as creative as DALL-E and GPT-3 and as smart as CLIP
RUssian Decoder On Language Picture Hyper-tasking (RUDOLPH) is a text-image-text transformer designed for easy fine-tuning on a range of tasks: from generating images from text descriptions and image classification to visual question answering and more. This model demonstrates the power of Hyper-tasking Transformers.

A Hyper-tasking model is a generalized multi-tasking model, i.e., a model that can solve almost all tasks within its supported modalities, necessarily including mutual pairwise translations between the modalities (two modalities in the case of RUDOLPH: images and Russian texts).
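Once the package is installed (see the Installing section below), a minimal loading sketch looks like this. The function names (`get_rudolph_model`, `get_tokenizer`, `get_vae`) follow the repo's example notebooks; treat them as assumptions and verify against the installed version.

```python
# Assumed API, following the repo's example notebooks -- verify against the
# installed package before relying on it.
from rudalle import get_tokenizer, get_vae      # shared tokenizer and image VAE
from rudolph.model import get_rudolph_model     # RUDOLPH checkpoint loader

device = 'cuda'  # a GPU is assumed for fp16 inference

model = get_rudolph_model('350M', fp16=True, device=device)  # or '1.3B' / '2.7B'
tokenizer = get_tokenizer()
vae = get_vae(dwt=False).to(device)
```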
## Models
The following table shows the parameter values for each RUDOLPH version.
|   | 350M | 1.3B | 2.7B |
|---|------|------|------|
| l | 64   | 128  | 384  |
| r | 64   | 128  | 128  |
| m | 16   | 32   | 24   |
| n | 16   | 32   | 24   |
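Reading `l` and `r` as the left and right text sequence lengths and `m` × `n` as the image token grid (my interpretation of the table, not stated explicitly here), the total context length of each version works out as follows:

```python
# Total sequence length per version, assuming the layout
# [left text (l)] + [image tokens (m * n)] + [right text (r)].
configs = {
    "350M": dict(l=64,  r=64,  m=16, n=16),
    "1.3B": dict(l=128, r=128, m=32, n=32),
    "2.7B": dict(l=384, r=128, m=24, n=24),
}

for name, p in configs.items():
    total = p["l"] + p["m"] * p["n"] + p["r"]
    print(f"{name}: {p['l']} + {p['m']}*{p['n']} + {p['r']} = {total} tokens")

# 350M: 64 + 16*16 + 64   = 384 tokens
# 1.3B: 128 + 32*32 + 128 = 1280 tokens
# 2.7B: 384 + 24*24 + 128 = 1088 tokens
```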
## Sparse Attention Mask

All three versions use the same per-layer pattern of sparse attention masks:

- **350M:** row - col - row - [last] conv
- **1.3B:** row - col - row - [last] conv
- **2.7B:** row - col - row - [last] conv
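As a rough illustration (not the repo's implementation), the sketch below builds causal boolean masks for the "row" and "col" patterns over a flattened m × n grid of image tokens; a "conv" layer would similarly restrict attention to a local neighborhood of the grid.

```python
import numpy as np

def row_mask(m: int, n: int) -> np.ndarray:
    """Causal mask where each image token attends only to earlier
    tokens in the same grid row (illustrative sketch)."""
    q = np.arange(m * n)
    same_row = (q[:, None] // n) == (q[None, :] // n)
    causal = q[:, None] >= q[None, :]
    return same_row & causal

def col_mask(m: int, n: int) -> np.ndarray:
    """Causal mask where each image token attends only to earlier
    tokens in the same grid column (illustrative sketch)."""
    q = np.arange(m * n)
    same_col = (q[:, None] % n) == (q[None, :] % n)
    causal = q[:, None] >= q[None, :]
    return same_col & causal

# Tiny 4x4 grid just to eyeball the patterns.
print(row_mask(4, 4).astype(int))
print(col_mask(4, 4).astype(int))
```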
## Installing

```bash
pip install rudolph==0.0.1rc10
```
## Usage and Fine-Tuning

Usage and fine-tuning examples for the different versions of RUDOLPH can be found in the `jupyters` folder.
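For orientation, a condensed generation call is sketched below, continuing from the loading sketch above. The pipeline helper `generate_texts` is taken from the repo's notebooks; its exact signature may differ between package versions, so treat it as an assumption and check the notebooks for the canonical form.

```python
# Assumed pipeline helper from the repo's example notebooks -- the exact
# signature may differ between package versions, so verify before use.
from rudolph.pipelines import generate_texts

texts = generate_texts(
    tokenizer,
    model,
    template='красивый пейзаж ',  # 'beautiful landscape' -- generation prompt
    top_k=32,
    top_p=0.8,
    texts_num=4,
)
for t in texts:
    print(t)
```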
## Citation

```bibtex
@misc{github2022ruDolph,
  title        = {RUDOLPH: One Hyper-Tasking Transformer can be creative as DALL-E and GPT-3 and smart as CLIP},
  author       = {AIRI},
  year         = {2022},
  howpublished = {\url{https://github.com/ai-forever/ru-dolph}},
}
```