🇺🇦 Ukrainian ELECTRA model

In this repository we monitor all experiments for our trained ELECTRA model for Ukrainian.

Made with 🤗, 🥨 and ❤️ from Munich.

Changelog

Training

The source data for the Ukrainian ELECTRA model consists of two corpora:

The resulting corpus has a size of 31GB (uncompressed). For the Wikipedia dump we use newlines as document boundaries, which ELECTRA pre-training supports.

We then apply two preprocessing steps:

Sentence splitting. We use tokenize-uk to perform sentence splitting (see the sketch after this list).

Filtering. We discard all sentences that are shorter than 5 tokens.
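
The sketch below illustrates both steps. It assumes the tokenize_sents and tokenize_words helpers of tokenize-uk and is only an illustration, not the exact preprocessing scripts used for the released model:

import tokenize_uk

def preprocess(raw_text, min_tokens=5):
    """Yield sentences from raw text, dropping those shorter than min_tokens tokens."""
    # Sentence splitting with tokenize-uk (assumed helper: tokenize_sents).
    for sentence in tokenize_uk.tokenize_sents(raw_text):
        # Filtering: keep only sentences with at least min_tokens tokens.
        if len(tokenize_uk.tokenize_words(sentence)) >= min_tokens:
            yield sentence

for sentence in preprocess("Київ — столиця України. Так."):
    print(sentence)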

The final training corpus has a size of 30GB and consists of exactly 2,402,761,324 tokens.

The Ukrainian ELECTRA model was trained for 1M steps in total using a batch size of 128. We largely follow the ELECTRA training procedure that was used for BERTurk.
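
For reference, the official ELECTRA implementation reads its hyper-parameters from a JSON dict passed via the --hparams argument of run_pretraining.py. The sketch below only encodes the two values stated above, using the hyper-parameter names from google-research/electra's configure_pretraining.py (assumed); it is not the full configuration used for this model:

import json

# Sketch of the two pre-training hyper-parameters stated above (assumed names from
# google-research/electra's configure_pretraining.py; not the complete configuration).
hparams = {
    "model_size": "base",        # base-sized ELECTRA model
    "train_batch_size": 128,     # batch size used for the Ukrainian ELECTRA model
    "num_train_steps": 1000000,  # 1M training steps in total
}

# The resulting file can be passed to run_pretraining.py via --hparams.
with open("hparams.json", "w") as f:
    json.dump(hparams, f, indent=2)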

Experiments

We use the latest Flair version (release 0.12) to perform experiments on the NER and PoS tagging downstream tasks. Older experiments can be found under this tag.

The script flair-fine-tuner.py is used to perform a basic hyper-parameter search. It expects a JSON-based configuration file; examples can be found in the ./configs/ner and ./configs/pos folders of this repository.

PoS Tagging

Ukrainian-IU

Description:

UD Ukrainian comprises 122K tokens in 7000 sentences of fiction, news, opinion articles, Wikipedia, legal documents, letters, posts, and comments — from the last 15 years, as well as from the first half of the 20th century.

Details:

We use the UD_UKRAINIAN dataset and perform a basic hyper-parameter search.
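
For illustration, a heavily simplified stand-in for flair-fine-tuner.py with Flair 0.12 could look as follows. The learning rate, batch size, epoch count and the SequenceTagger settings are placeholders rather than the best configuration from the hyper-parameter search, and the discriminator checkpoint name is assumed from the generator name listed in the model usage section:

from flair.datasets import UD_UKRAINIAN
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Load the UD Ukrainian corpus and build the PoS label dictionary.
corpus = UD_UKRAINIAN()
label_dict = corpus.make_label_dictionary(label_type="upos")

# Fine-tune the discriminator model (assumed checkpoint name).
embeddings = TransformerWordEmbeddings(
    "lang-uk/electra-base-ukrainian-cased-discriminator",
    fine_tune=True,
)

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="upos",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "resources/taggers/upos-ukrainian",
    learning_rate=5e-5,   # placeholder
    mini_batch_size=16,   # placeholder
    max_epochs=10,        # placeholder
)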

Results (Development set, best hyper-param config):

Model                         | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg.
------------------------------|-------|-------|-------|-------|-------|-------------
bert-base-multilingual-cased  | 98.03 | 98.11 | 98.18 | 98.02 | 97.95 | 98.06 ± 0.09
xlm-roberta-base              | 98.57 | 98.47 | 98.49 | 98.40 | 98.43 | 98.47 ± 0.06
facebook/xlm-v-base           | 98.50 | 98.48 | 98.54 | 98.56 | 98.60 | 98.54 ± 0.05
Ukrainian ELECTRA (1M)        | 98.57 | 98.64 | 98.60 | 98.56 | 98.62 | 98.60 ± 0.03

Results (Test set, best hyper-param config on Development set):

Model                         | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg.
------------------------------|-------|-------|-------|-------|-------|-------------
bert-base-multilingual-cased  | 97.90 | 97.89 | 97.98 | 97.84 | 97.94 | 97.91 ± 0.05
xlm-roberta-base              | 98.33 | 98.51 | 98.43 | 98.41 | 98.43 | 98.42 ± 0.06
facebook/xlm-v-base           | 98.39 | 98.37 | 98.47 | 98.15 | 98.44 | 98.36 ± 0.13
Ukrainian ELECTRA (1M)        | 98.63 | 98.55 | 98.53 | 98.50 | 98.59 | 98.56 ± 0.05

NER

We use the train split (train.iob) from this lang-uk repository and create 5 random train and development splits. We perform a hyper-parameter search on these 5 splits and select the best configuration (based on the F1-score on the development set). In the final step we use the best hyper-parameter configuration, train 5 models that additionally include the development data as training data, and evaluate them on the test split (test.iob) from the mentioned lang-uk repository.

The script create_random_split.py was used to create the 5 random splits; all created data can be found in the ./ner_experiments folder of this repo.
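
The sketch below illustrates what creating one such random split could look like; it is not the actual create_random_split.py, and the 90/10 ratio, seed and output file names are assumptions:

import os
import random

def read_iob_sentences(path):
    """Return the sentences of an IOB file (sentences are separated by blank lines)."""
    sentences, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                current.append(line)
            elif current:
                sentences.append(current)
                current = []
    if current:
        sentences.append(current)
    return sentences

def write_iob(path, sentences):
    with open(path, "w", encoding="utf-8") as f:
        for sentence in sentences:
            f.writelines(sentence)
            f.write("\n")

sentences = read_iob_sentences("train.iob")
random.seed(1)                   # use a different seed for each of the 5 splits
random.shuffle(sentences)
cut = int(0.9 * len(sentences))  # assumed 90/10 train/dev ratio
os.makedirs("split_1", exist_ok=True)
write_iob("split_1/train.txt", sentences[:cut])
write_iob("split_1/dev.txt", sentences[cut:])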

Results (Development set, best hyper-param config):

Model                         | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg.
------------------------------|-------|-------|-------|-------|-------|-------------
bert-base-multilingual-cased  | 90.55 | 89.89 | 90.16 | 90.84 | 90.81 | 90.45 ± 0.42
xlm-roberta-base              | 92.25 | 91.99 | 91.72 | 90.54 | 91.35 | 91.57 ± 0.67
facebook/xlm-v-base           | 90.38 | 88.81 | 89.62 | 91.34 | 91.66 | 90.36 ± 1.18
Ukrainian ELECTRA (1M)        | 94.17 | 92.13 | 92.74 | 91.45 | 92.23 | 92.54 ± 1.02

Results (Test set, best hyper-param config on Development set incl. development data):

Model                         | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg.
------------------------------|-------|-------|-------|-------|-------|-------------
bert-base-multilingual-cased  | 84.20 | 85.61 | 85.11 | 85.17 | 83.90 | 84.80 ± 0.72
xlm-roberta-base              | 87.85 | 87.39 | 87.31 | 88.15 | 86.19 | 87.38 ± 0.75
facebook/xlm-v-base           | 86.00 | 86.25 | 86.22 | 87.05 | 86.34 | 86.37 ± 0.40
Ukrainian ELECTRA (1M)        | 88.16 | 87.96 | 88.39 | 88.14 | 87.68 | 88.07 ± 0.26

Model usage

The Ukrainian ELECTRA model can be used via the lang-uk organization on the Hugging Face Model Hub.

As ELECTRA is trained with a generator and a discriminator model, both models are available. The generator model is usually used for masked language modeling, whereas the discriminator model is used for fine-tuning on downstream tasks such as token or sequence classification.

The following model names can be used:

Example usage with 🤗 Transformers:

from transformers import AutoModelForMaskedLM, AutoTokenizer

# Load the generator checkpoint with a masked-LM head.
# AutoModelForMaskedLM replaces the deprecated AutoModelWithLMHead.
model_name = "lang-uk/electra-base-ukrainian-cased-generator"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
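
A small usage example with the fill-mask pipeline (the Ukrainian example sentence is purely illustrative and assumes the generator checkpoint exposes a masked-LM head, as in the snippet above):

from transformers import pipeline

# Let the generator fill in the masked token of a Ukrainian sentence.
fill_mask = pipeline("fill-mask", model="lang-uk/electra-base-ukrainian-cased-generator")

for prediction in fill_mask("Київ — столиця [MASK]."):
    print(prediction["token_str"], prediction["score"])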

License

All models are licensed under MIT.

Contact (Bugs, Feedback, Contribution and more)

For questions about our Ukrainian ELECTRA model just open an issue in this repo 🤗

Citation

You can use the following BibTeX entry for citation:

@software{stefan_schweter_2020_4267880,
  author       = {Stefan Schweter},
  title        = {Ukrainian ELECTRA model},
  month        = nov,
  year         = 2020,
  publisher    = {Zenodo},
  version      = {1.0.0},
  doi          = {10.5281/zenodo.4267880},
  url          = {https://doi.org/10.5281/zenodo.4267880}
}

Acknowledgments

Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). Thanks for providing access to the TRC ❤️

Thanks to the generous support from the Hugging Face team, it is possible to download both cased and uncased models from their S3 storage 🤗