Polish RoBERTa

This repository contains pre-trained RoBERTa models for Polish, as well as evaluation code for several Polish linguistic tasks. The released models were trained with the Fairseq toolkit at the National Information Processing Institute in Warsaw, Poland. We provide models based on both the BERT base and BERT large architectures. Two versions of each model are available: one for Fairseq and one for Huggingface Transformers.

Updates

08.03.2022 - Base and Large Polish Longformer models have been added to the Huggingface Hub. The models were initialized with Polish RoBERTa (v2) weights and then fine-tuned on a corpus of long documents, ranging from 1024 to 4096 tokens.

19.02.2022 - The models are now available on the Huggingface Hub.

24.01.2022 - Polish DistilRoBERTa model added. The model was trained using knowledge distillation with the RoBERTa-v2 base model as the teacher. The distilled version has only half the encoder blocks of the original model, making it suitable for deployment on devices with limited resources such as smartphones.
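
As a rough illustration only (not the exact recipe used to train this model), knowledge distillation is commonly implemented by matching the student's output distribution to the teacher's with a temperature-scaled KL-divergence term, for example:

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soft-target loss: KL divergence between the temperature-scaled
    # teacher and student distributions; the squared temperature factor
    # follows the standard distillation formulation.
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)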

18.01.2022 - We release the second version of the large model. This version has been trained using the same procedure as RoBERTa‑base-v2: unigram tokenizer, whole word masking, and more update steps with a lower batch size. We also used a larger vocabulary of 128k entries.

21.03.2021 - We release a new version of the base model. The updated model has been trained on the same corpus as the original model, but with different hyperparameters. We made the following changes: 1) a SentencePiece unigram model was used instead of BPE, 2) the model was trained with a whole-word masking objective instead of classic token masking, 3) we utilized the full context of 512 tokens, so training examples could include more than one sentence (the original model was trained on single sentences only), 4) longer pretraining (400k steps).
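
For reference, a SentencePiece unigram tokenizer of this kind can be trained with the sentencepiece library. The snippet below is only an illustrative sketch; the corpus path and vocabulary size are placeholders, not the exact settings used for this model:

import sentencepiece as spm

# Placeholders: the corpus file and vocabulary size are illustrative only.
spm.SentencePieceTrainer.train(
    input="polish_corpus.txt",
    model_prefix="sentencepiece.bpe",
    model_type="unigram",
    vocab_size=50000,
)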

Models

<table> <thead> <th>Model</th> <th>L / H / A*</th> <th>Batch size</th> <th>Update steps</th> <th>Corpus size</th> <th>KLEJ Score**</th> <th>Fairseq</th> <th>Transformers</th> </thead> <tr> <td>RoBERTa&nbsp;(base)</td> <td>12&nbsp;/&nbsp;768&nbsp;/&nbsp;12</td> <td>8k</td> <td>125k</td> <td>~20GB</td> <td>85.39</td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models/roberta_base_fairseq.zip">v0.9.0</a> </td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-transformers-v3.4.0/roberta_base_transformers.zip">v3.4</a> </td> </tr> <tr> <td>RoBERTa&#8209;v2&nbsp;(base)</td> <td>12&nbsp;/&nbsp;768&nbsp;/&nbsp;12</td> <td>8k</td> <td>400k</td> <td>~20GB</td> <td>86.72</td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-v2/roberta_base_fairseq.zip">v0.10.1</a> </td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-v2/roberta_base_transformers.zip">v4.4</a> </td> </tr> <tr> <td>RoBERTa&nbsp;(large)</td> <td>24&nbsp;/&nbsp;1024&nbsp;/&nbsp;16</td> <td>30k</td> <td>50k</td> <td>~135GB</td> <td>87.69</td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models/roberta_large_fairseq.zip">v0.9.0</a> </td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-transformers-v3.4.0/roberta_large_transformers.zip">v3.4</a> </td> </tr> <tr> <td>RoBERTa&#8209;v2&nbsp;(large)</td> <td>24&nbsp;/&nbsp;1024&nbsp;/&nbsp;16</td> <td>2k</td> <td>400k</td> <td>~200GB</td> <td>88.87</td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-v2/roberta_large_fairseq.zip">v0.10.2</a> </td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-v2/roberta_large_transformers.zip">v4.14</a> </td> </tr> <tr> <td>DistilRoBERTa</td> <td>6&nbsp;/&nbsp;768&nbsp;/&nbsp;12</td> <td>1k</td> <td>10ep.</td> <td>~20GB</td> <td>84.55</td> <td> n/a </td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-v2/distilroberta_transformers.zip">v4.13</a> </td> </tr> </table>

* L - the number of encoder blocks, H - hidden size, A - the number of attention heads <br/> ** Average KLEJ score over 5 runs, see evaluation section for detailed results<br/>

More details are available in the paper Pre-training Polish Transformer-based Language Models at Scale.

@InProceedings{dadas2020pretraining,
  title="Pre-training Polish Transformer-Based Language Models at Scale",
  author="Dadas, S{\l}awomir and Pere{\l}kiewicz, Micha{\l} and Po{\'{s}}wiata, Rafa{\l}",
  booktitle="Artificial Intelligence and Soft Computing",
  year="2020",
  publisher="Springer International Publishing",
  pages="301--314",
  isbn="978-3-030-61534-5"
}

Getting started

How to use with Fairseq

import os
from fairseq.models.roberta import RobertaModel, RobertaHubInterface
from fairseq import hub_utils

# Path to the unpacked Fairseq model directory
model_path = "roberta_large_fairseq"
loaded = hub_utils.from_pretrained(
    model_name_or_path=model_path,
    data_name_or_path=model_path,
    bpe="sentencepiece",
    sentencepiece_vocab=os.path.join(model_path, "sentencepiece.bpe.model"),
    load_checkpoint_heads=True,
    archive_map=RobertaModel.hub_models(),
    cpu=True
)
roberta = RobertaHubInterface(loaded['args'], loaded['task'], loaded['models'][0])
roberta.eval()
# Encode the sentence into token ids and extract contextual token embeddings
input = roberta.encode("Zażółcić gęślą jaźń.")
output = roberta.extract_features(input)
print(output[0][1])
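
The tensor returned by extract_features has shape (batch size, sequence length, hidden size). If a single fixed-size sentence vector is needed, one simple option is to mean-pool the token embeddings; this is only an illustrative sketch, not a method prescribed by this repository:

# Continuing from the snippet above: average all token vectors of the
# first (and only) sentence into a single embedding of size hidden_size.
sentence_embedding = output[0].mean(dim=0)
print(sentence_embedding.shape)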

How to use with HuggingFace Transformers

import torch, os
from transformers import RobertaModel, AutoModel, PreTrainedTokenizerFast

# Path to the unpacked Transformers model directory
model_dir = "roberta_base_transformers"
tokenizer = PreTrainedTokenizerFast(tokenizer_file=os.path.join(model_dir, "tokenizer.json"))
model: RobertaModel = AutoModel.from_pretrained(model_dir)
# Encode the sentence into token ids and extract contextual token embeddings
input = tokenizer.encode("Zażółcić gęślą jaźń.")
output = model(torch.tensor([input]))[0]
print(output[0][1])
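
The models can also be loaded directly from the Huggingface Hub instead of a local directory. Below is a minimal sketch; the repository identifier used here (sdadas/polish-roberta-base-v2) is an assumption and should be verified against the model cards on the Hub:

import torch
from transformers import AutoTokenizer, AutoModel

# Assumed Hub identifier - check the exact name on the Huggingface Hub
model_id = "sdadas/polish-roberta-base-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

batch = tokenizer("Zażółcić gęślą jaźń.", return_tensors="pt")
with torch.no_grad():
    # last_hidden_state has shape (1, sequence length, hidden size)
    output = model(**batch).last_hidden_state
print(output.shape)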

Evaluation

To replicate our experiments, first download the required datasets using the download_data.py script:

python download_data.py

Next, run the run_tasks.py script to prepare the data, fine-tune, and evaluate the model. We used the following parameters for each task:

python run_tasks.py --arch roberta_base --model_dir roberta_base_fairseq --train-epochs 10 --tasks KLEJ-NKJP --fp16 True --max-sentences 8 --update-freq 2
python run_tasks.py --arch roberta_base --model_dir roberta_base_fairseq --train-epochs 10 --tasks KLEJ-CDS-E --fp16 True --max-sentences 8 --update-freq 2
python run_tasks.py --arch roberta_base --model_dir roberta_base_fairseq --train-epochs 10 --tasks KLEJ-CDS-R --fp16 True --max-sentences 8 --update-freq 2
python run_tasks.py --arch roberta_base --model_dir roberta_base_fairseq --train-epochs 1 --tasks KLEJ-CBD --fp16 True --max-sentences 8 --update-freq 4 --resample 0:0.75,1:3
python run_tasks.py --arch roberta_base --model_dir roberta_base_fairseq --train-epochs 10 --tasks KLEJ-POLEMO-IN --fp16 True --max-sentences 8 --update-freq 2
python run_tasks.py --arch roberta_base --model_dir roberta_base_fairseq --train-epochs 10 --tasks KLEJ-POLEMO-OUT --fp16 True --max-sentences 8 --update-freq 2
python run_tasks.py --arch roberta_base --model_dir roberta_base_fairseq --train-epochs 10 --tasks KLEJ-DYK --fp16 True --max-sentences 8 --update-freq 4 --resample 0:1,1:3
python run_tasks.py --arch roberta_base --model_dir roberta_base_fairseq --train-epochs 10 --tasks KLEJ-PSC --fp16 True --max-sentences 8 --update-freq 4 --resample 0:1,1:3
python run_tasks.py --arch roberta_base --model_dir roberta_base_fairseq --train-epochs 10 --tasks KLEJ-ECR --fp16 True --max-sentences 8 --update-freq 2
python run_tasks.py --arch roberta_base --model_dir roberta_base_fairseq --train-epochs 10 --tasks 8TAGS --fp16 True --max-sentences 8 --update-freq 2
python run_tasks.py --arch roberta_base --model_dir roberta_base_fairseq --train-epochs 10 --tasks SICK-E --fp16 True --max-sentences 8 --update-freq 2
python run_tasks.py --arch roberta_base --model_dir roberta_base_fairseq --train-epochs 10 --tasks SICK-R --fp16 True --max-sentences 8 --update-freq 2

Evaluation results on KLEJ Benchmark

Below we show the evaluation results of our models on the tasks included in the KLEJ Benchmark. We fine-tuned each model 5 times on every task. Detailed per-run scores and averaged scores are presented in Tables 1 to 5.

<details> <summary>Table 1. KLEJ results for RoBERTa base model</summary>

| Run | NKJP | CDSC‑E | CDSC‑R | CBD | PolEmo‑IN | PolEmo‑OUT | DYK | PSC | AR | Avg |
|-----|------|--------|--------|-----|-----------|------------|-----|-----|----|-----|
| 1 | 93.15 | 93.30 | 94.26 | 66.67 | 91.97 | 78.74 | 66.86 | 98.63 | 87.75 | 85.70 |
| 2 | 93.93 | 94.20 | 93.94 | 68.16 | 91.83 | 75.91 | 65.93 | 98.77 | 87.93 | 85.62 |
| 3 | 94.22 | 94.20 | 94.04 | 69.23 | 90.17 | 76.92 | 65.69 | 99.24 | 87.76 | 85.72 |
| 4 | 93.97 | 94.70 | 93.98 | 63.81 | 90.44 | 76.32 | 65.18 | 99.39 | 87.58 | 85.04 |
| 5 | 93.63 | 94.00 | 93.96 | 65.95 | 90.58 | 74.09 | 65.92 | 98.48 | 87.08 | 84.85 |
| Avg | 93.78 | 94.08 | 94.04 | 66.77 | 91.00 | 76.40 | 65.92 | 98.90 | 87.62 | 85.39 |

</details> <details> <summary>Table 2. KLEJ results for RoBERTa-v2 base model</summary>

| Run | NKJP | CDSC‑E | CDSC‑R | CBD | PolEmo‑IN | PolEmo‑OUT | DYK | PSC | AR | Avg |
|-----|------|--------|--------|-----|-----------|------------|-----|-----|----|-----|
| 1 | 94.80 | 94.20 | 94.30 | 69.62 | 90.58 | 78.74 | 71.23 | 98.62 | 87.99 | 86.68 |
| 2 | 94.27 | 94.50 | 94.44 | 70.67 | 90.17 | 78.95 | 69.64 | 99.08 | 87.98 | 86.63 |
| 3 | 93.73 | 94.30 | 94.64 | 70.67 | 91.41 | 78.14 | 74.44 | 98.92 | 87.64 | 87.10 |
| 4 | 94.07 | 93.90 | 94.58 | 70.00 | 91.00 | 78.14 | 69.94 | 98.93 | 87.22 | 86.42 |
| 5 | 94.31 | 94.20 | 94.71 | 70.46 | 91.00 | 77.94 | 71.67 | 98.48 | 88.15 | 86.77 |
| Avg | 94.24 | 94.22 | 94.54 | 70.28 | 90.83 | 78.38 | 71.38 | 98.81 | 87.80 | 86.72 |

</details> <details> <summary>Table 3. KLEJ results for RoBERTa large model</summary>

| Run | NKJP | CDSC‑E | CDSC‑R | CBD | PolEmo‑IN | PolEmo‑OUT | DYK | PSC | AR | Avg |
|-----|------|--------|--------|-----|-----------|------------|-----|-----|----|-----|
| 1 | 94.31 | 93.50 | 94.63 | 72.39 | 92.80 | 80.54 | 71.87 | 98.63 | 88.82 | 87.50 |
| 2 | 95.14 | 93.90 | 94.93 | 69.82 | 92.80 | 82.59 | 73.39 | 98.94 | 88.96 | 87.83 |
| 3 | 95.24 | 93.30 | 94.61 | 71.59 | 91.41 | 82.19 | 75.35 | 98.64 | 89.31 | 87.96 |
| 4 | 94.46 | 93.20 | 94.96 | 71.08 | 92.80 | 82.39 | 70.59 | 99.09 | 88.60 | 87.46 |
| 5 | 94.46 | 93.00 | 94.82 | 69.83 | 92.11 | 83.00 | 74.85 | 98.79 | 88.65 | 87.72 |
| Avg | 94.72 | 93.38 | 94.79 | 70.94 | 92.38 | 82.14 | 73.21 | 98.82 | 88.87 | 87.69 |

</details> <details> <summary>Table 4. KLEJ results for RoBERTa-v2 large model</summary>

| Run | NKJP | CDSC‑E | CDSC‑R | CBD | PolEmo‑IN | PolEmo‑OUT | DYK | PSC | AR | Avg |
|-----|------|--------|--------|-----|-----------|------------|-----|-----|----|-----|
| 1 | 95.82 | 94.10 | 95.02 | 74.54 | 93.07 | 85.43 | 76.70 | 98.47 | 89.24 | 89.15 |
| 2 | 95.72 | 93.90 | 95.10 | 74.55 | 93.49 | 84.01 | 74.71 | 98.93 | 89.02 | 88.83 |
| 3 | 95.43 | 94.30 | 95.36 | 70.97 | 93.21 | 82.59 | 76.61 | 98.15 | 89.31 | 88.44 |
| 4 | 95.97 | 94.40 | 95.12 | 75.10 | 92.80 | 85.83 | 74.05 | 98.93 | 89.14 | 89.04 |
| 5 | 95.92 | 94.70 | 95.09 | 75.66 | 93.07 | 82.79 | 75.35 | 98.62 | 88.78 | 88.89 |
| Avg | 95.77 | 94.28 | 95.14 | 74.16 | 93.13 | 84.13 | 75.48 | 98.62 | 89.10 | 88.87 |

</details> <details> <summary>Table 5. KLEJ results for DistilRoBERTa model</summary>

| Run | NKJP | CDSC‑E | CDSC‑R | CBD | PolEmo‑IN | PolEmo‑OUT | DYK | PSC | AR | Avg |
|-----|------|--------|--------|-----|-----------|------------|-----|-----|----|-----|
| 1 | 93.54 | 93.00 | 93.57 | 67.60 | 88.78 | 77.53 | 61.79 | 93.59 | 87.54 | 84.10 |
| 2 | 93.78 | 92.30 | 93.54 | 69.20 | 90.03 | 78.95 | 61.98 | 95.92 | 87.23 | 84.77 |
| 3 | 93.15 | 93.20 | 93.55 | 68.47 | 89.61 | 79.15 | 60.05 | 95.15 | 87.19 | 84.39 |
| 4 | 93.29 | 93.30 | 93.30 | 68.99 | 90.17 | 78.34 | 62.18 | 95.80 | 86.83 | 84.69 |
| 5 | 93.68 | 93.40 | 93.62 | 69.57 | 90.44 | 77.73 | 61.11 | 96.26 | 87.44 | 84.81 |
| Avg | 93.49 | 93.04 | 93.52 | 68.77 | 89.81 | 78.34 | 61.42 | 95.35 | 87.24 | 84.55 |

</details>

Evaluation results on other tasks

| Task | Task type | Metric | Base model (v1) | Large model (v1) |
|------|-----------|--------|-----------------|------------------|
| SICK-E | Textual entailment | Accuracy | 86.13 | 87.67 |
| SICK-R | Semantic relatedness | Spearman correlation | 82.26 | 85.63 |
| Poleval 2018 - NER | Named entity recognition | F1 score (exact match) | 87.94 | 89.98 |
| 8TAGS | Multi class classification | Accuracy | 77.22 | 80.84 |