Polish NLP resources

This repository contains pre-trained models and language resources for Natural Language Processing in Polish, created during my research. Some of the models are also available on the Huggingface Hub.

If you'd like to use any of these resources in your research, please cite:

@Misc{polish-nlp-resources,
  author =       {S{\l}awomir Dadas},
  title =        {A repository of Polish {NLP} resources},
  howpublished = {Github},
  year =         {2019},
  url =          {https://github.com/sdadas/polish-nlp-resources/}
}

Word embeddings

The following section includes pre-trained word embeddings for Polish. Each model was trained on a corpus consisting of a Polish Wikipedia dump, Polish books, and articles, 1.5 billion tokens in total.

Word2Vec

Word2Vec trained with Gensim. 100 dimensions, negative sampling. The vocabulary contains lemmatized words with 3 or more occurrences in the corpus, plus a set of pre-defined punctuation symbols, all numbers from 0 to 10,000, and Polish forenames and last names. The archive contains the embeddings in Gensim binary format. Example of usage:

from gensim.models import KeyedVectors

if __name__ == '__main__':
    word2vec = KeyedVectors.load("word2vec_100_3_polish.bin")
    print(word2vec.similar_by_word("bierut"))
    
# [('cyrankiewicz', 0.818274736404419), ('gomułka', 0.7967918515205383), ('raczkiewicz', 0.7757788896560669), ('jaruzelski', 0.7737460732460022), ('pużak', 0.7667238712310791)]

Download (GitHub)

FastText

FastText trained with Gensim. The vocabulary and dimensionality are identical to the Word2Vec model. The archive contains the embeddings in Gensim binary format. Example of usage:

from gensim.models import KeyedVectors

if __name__ == '__main__':
    word2vec = KeyedVectors.load("fasttext_100_3_polish.bin")
    print(word2vec.similar_by_word("bierut"))
    
# [('bieruty', 0.9290274381637573), ('gierut', 0.8921363353729248), ('bieruta', 0.8906412124633789), ('bierutow', 0.8795544505119324), ('bierutowsko', 0.839280366897583)]

Download (OneDrive)

GloVe

Global Vectors for Word Representation (GloVe) trained using the reference implementation from Stanford NLP. 100 dimensions, contains lemmatized words with 3 or more occurrences in the corpus. Example of usage:

from gensim.models import KeyedVectors

if __name__ == '__main__':
    word2vec = KeyedVectors.load_word2vec_format("glove_100_3_polish.txt")
    print(word2vec.similar_by_word("bierut"))
    
# [('cyrankiewicz', 0.8335597515106201), ('gomułka', 0.7793121337890625), ('bieruta', 0.7118682861328125), ('jaruzelski', 0.6743760108947754), ('minc', 0.6692837476730347)]

Download (GitHub)

High dimensional word vectors

Pre-trained vectors using the same vocabulary as above but with higher dimensionality. These vectors are more suitable for representing larger chunks of text, such as sentences or documents, using simple word aggregation methods (averaging, max pooling, etc.), as more semantic information is preserved this way.
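
Such vectors can be aggregated into sentence or document representations by simple pooling. Below is a minimal sketch of sentence averaging, using the 100-dimensional GloVe model from this repository; the same code applies to the higher-dimensional variants, only the file name changes.

from gensim.models import KeyedVectors
import numpy as np

# Load the GloVe vectors (word2vec text format); swap in a 300/500/800d file as needed.
word2vec = KeyedVectors.load_word2vec_format("glove_100_3_polish.txt")

def sentence_vector(words):
    # Average the vectors of all in-vocabulary (lemmatized) words.
    vectors = [word2vec[word] for word in words if word in word2vec]
    return np.mean(vectors, axis=0)

print(sentence_vector(["zespół", "astronom", "odkryć", "niezwykły", "planeta"]))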

GloVe - 300d: Part 1 (GitHub), 500d: Part 1 (GitHub) Part 2 (GitHub), 800d: Part 1 (GitHub) Part 2 (GitHub) Part 3 (GitHub)

Word2Vec - 300d (OneDrive), 500d (OneDrive), 800d (OneDrive)

FastText - 300d (OneDrive), 500d (OneDrive), 800d (OneDrive)

Compressed Word2Vec

This is a compressed version of the Word2Vec embedding model described above. For compression, we used the method described in Compressing Word Embeddings via Deep Compositional Code Learning by Shu and Nakayama. Compressed embeddings are suited for deployment on storage-poor devices such as mobile phones. The model weighs 38MB, only 4.4% of the size of the original Word2Vec embeddings. Although the authors of the article claim that compression with their method doesn't hurt model performance, we noticed a slight but acceptable drop in accuracy when using the compressed embeddings. Sample decoder class with usage:

import gzip
from typing import Dict, Callable
import numpy as np

class CompressedEmbedding(object):

    def __init__(self, vocab_path: str, embedding_path: str, to_lowercase: bool=True):
        self.vocab_path: str = vocab_path
        self.embedding_path: str = embedding_path
        self.to_lower: bool = to_lowercase
        self.vocab: Dict[str, int] = self.__load_vocab(vocab_path)
        # The npz archive contains two arrays: discrete codes assigned to each word
        # and the shared codebook of basis vectors.
        embedding = np.load(embedding_path)
        self.codes: np.ndarray = embedding[embedding.files[0]]
        self.codebook: np.ndarray = embedding[embedding.files[1]]
        self.m = self.codes.shape[1]                   # number of codebooks per word
        self.k = int(self.codebook.shape[0] / self.m)  # number of vectors in each codebook
        self.dim: int = self.codebook.shape[1]         # embedding dimensionality

    def __load_vocab(self, vocab_path: str) -> Dict[str, int]:
        # The vocabulary is a plain-text (optionally gzipped) file with one word per line.
        open_func: Callable = gzip.open if vocab_path.endswith(".gz") else open
        with open_func(vocab_path, "rt", encoding="utf-8") as input_file:
            return {line.strip(): idx for idx, line in enumerate(input_file)}

    def vocab_vector(self, word: str):
        if word == "<pad>": return np.zeros(self.dim)
        val: str = word.lower() if self.to_lower else word
        index: int = self.vocab.get(val, self.vocab["<unk>"])
        codes = self.codes[index]
        # Reconstruct the embedding by summing the selected vector from each codebook.
        code_indices = np.array([idx * self.k + offset for idx, offset in enumerate(np.nditer(codes))])
        return np.sum(self.codebook[code_indices], axis=0)

if __name__ == '__main__':
    word2vec = CompressedEmbedding("word2vec_100_3.vocab.gz", "word2vec_100_3.compressed.npz")
    print(word2vec.vocab_vector("bierut"))

Download (GitHub)

Wikipedia2Vec

Wikipedia2Vec is a toolkit for learning joint representations of words and Wikipedia entities. We share Polish embeddings learned using a modified version of the library in which we added lemmatization and fixed some issues regarding parsing wiki dumps for languages other than English. Embedding models are available in sizes from 100 to 800 dimensions. A simple example:

from wikipedia2vec import Wikipedia2Vec

wiki2vec = Wikipedia2Vec.load("wiki2vec-plwiki-100.bin")
print(wiki2vec.most_similar(wiki2vec.get_entity("Bolesław Bierut")))
# (<Entity Bolesław Bierut>, 1.0), (<Word bierut>, 0.75790733), (<Word gomułka>, 0.7276504),
# (<Entity Krajowa Rada Narodowa>, 0.7081445), (<Entity Władysław Gomułka>, 0.7043667) [...]

Download embeddings: 100d, 300d, 500d, 800d.

Language models

ELMo

Embeddings from Language Models (ELMo) is a contextual embedding method presented in Deep contextualized word representations by Peters et al. Sample usage with PyTorch is shown below; for more detailed instructions on integrating ELMo with your model, please refer to the official repositories github.com/allenai/bilm-tf (TensorFlow) and github.com/allenai/allennlp (PyTorch).

from allennlp.commands.elmo import ElmoEmbedder

elmo = ElmoEmbedder("options.json", "weights.hdf5")
print(elmo.embed_sentence(["Zażółcić", "gęślą", "jaźń"]))

Download (GitHub)

RoBERTa

A language model for Polish based on the popular Transformer architecture. We provide weights for the improved BERT language model introduced in RoBERTa: A Robustly Optimized BERT Pretraining Approach. Two RoBERTa models are available for Polish: a base and a large model. A summary of the pre-training parameters for each model is shown in the table below. We release two versions of each model: one in the Fairseq format and the other in the HuggingFace Transformers format. More information about the models can be found in a separate repository.

<table> <thead> <th>Model</th> <th>L / H / A*</th> <th>Batch size</th> <th>Update steps</th> <th>Corpus size</th> <th>Fairseq</th> <th>Transformers</th> </thead> <tr> <td>RoBERTa&nbsp;(base)</td> <td>12&nbsp;/&nbsp;768&nbsp;/&nbsp;12</td> <td>8k</td> <td>125k</td> <td>~20GB</td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models/roberta_base_fairseq.zip">v0.9.0</a> </td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-transformers-v3.4.0/roberta_base_transformers.zip">v3.4</a> </td> </tr> <tr> <td>RoBERTa&#8209;v2&nbsp;(base)</td> <td>12&nbsp;/&nbsp;768&nbsp;/&nbsp;12</td> <td>8k</td> <td>400k</td> <td>~20GB</td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-v2/roberta_base_fairseq.zip">v0.10.1</a> </td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-v2/roberta_base_transformers.zip">v4.4</a> </td> </tr> <tr> <td>RoBERTa&nbsp;(large)</td> <td>24&nbsp;/&nbsp;1024&nbsp;/&nbsp;16</td> <td>30k</td> <td>50k</td> <td>~135GB</td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models/roberta_large_fairseq.zip">v0.9.0</a> </td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-transformers-v3.4.0/roberta_large_transformers.zip">v3.4</a> </td> </tr> <tr> <td>RoBERTa&#8209;v2&nbsp;(large)</td> <td>24&nbsp;/&nbsp;1024&nbsp;/&nbsp;16</td> <td>2k</td> <td>400k</td> <td>~200GB</td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-v2/roberta_large_fairseq.zip">v0.10.2</a> </td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-v2/roberta_large_transformers.zip">v4.14</a> </td> </tr> <tr> <td>DistilRoBERTa</td> <td>6&nbsp;/&nbsp;768&nbsp;/&nbsp;12</td> <td>1k</td> <td>10ep.</td> <td>~20GB</td> <td> n/a </td> <td> <a href="https://github.com/sdadas/polish-roberta/releases/download/models-v2/distilroberta_transformers.zip">v4.13</a> </td> </tr> </table>

* L - the number of encoder blocks, H - hidden size, A - the number of attention heads <br/>

Example in Fairseq:

import os
from fairseq.models.roberta import RobertaModel, RobertaHubInterface
from fairseq import hub_utils

model_path = "roberta_large_fairseq"
loaded = hub_utils.from_pretrained(
    model_name_or_path=model_path,
    data_name_or_path=model_path,
    bpe="sentencepiece",
    sentencepiece_vocab=os.path.join(model_path, "sentencepiece.bpe.model"),
    load_checkpoint_heads=True,
    archive_map=RobertaModel.hub_models(),
    cpu=True
)
roberta = RobertaHubInterface(loaded['args'], loaded['task'], loaded['models'][0])
roberta.eval()
roberta.fill_mask('Druga wojna światowa zakończyła się w <mask> roku.', topk=1)
roberta.fill_mask('Ludzie najbardziej boją się <mask>.', topk=1)
#[('Druga wojna światowa zakończyła się w 1945 roku.', 0.9345270991325378, ' 1945')]
#[('Ludzie najbardziej boją się śmierci.', 0.14140743017196655, ' śmierci')]
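
The Transformers releases can be used with the standard HuggingFace API as well. A minimal fill-mask sketch, assuming the base v2 checkpoint published on the Huggingface Hub as sdadas/polish-roberta-base-v2:

from transformers import pipeline

# Minimal sketch: masked word prediction with the Transformers release of Polish RoBERTa.
fill_mask = pipeline("fill-mask", model="sdadas/polish-roberta-base-v2")
mask = fill_mask.tokenizer.mask_token
print(fill_mask(f"Druga wojna światowa zakończyła się w {mask} roku."))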

It is recommended to use the above models, but it is still possible to download our old model, trained with a smaller batch size (2K) and a smaller corpus (15GB).

BART

BART is a Transformer-based sequence-to-sequence model trained with a denoising objective. It can be fine-tuned for prediction tasks, just like regular BERT, as well as for various text generation tasks such as machine translation, summarization, paraphrasing, etc. We provide a Polish version of the BART base model, trained on a large corpus of texts extracted from Common Crawl (200+ GB). More information on the BART architecture can be found in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. Example in HuggingFace Transformers:

import os
from transformers import BartForConditionalGeneration, PreTrainedTokenizerFast

model_dir = "bart_base_transformers"
tok = PreTrainedTokenizerFast(tokenizer_file=os.path.join(model_dir, "tokenizer.json"))
model = BartForConditionalGeneration.from_pretrained(model_dir)
sent = "Druga<mask>światowa zakończyła się w<mask>roku kapitulacją hitlerowskich<mask>"
batch = tok(sent, return_tensors='pt')
generated_ids = model.generate(batch['input_ids'])
print(tok.batch_decode(generated_ids, skip_special_tokens=True))
# ['Druga wojna światowa zakończyła się w 1945 roku kapitulacją hitlerowskich Niemiec.']

Download for Fairseq v0.10 or HuggingFace Transformers v4.0.

GPT-2

GPT-2 is a unidirectional transformer-based language model trained with an auto-regressive objective, originally introduced in the Language Models are Unsupervised Multitask Learners paper. The original English GPT-2 was released in four sizes differing by the number of parameters: Small (112M), Medium (345M), Large (774M), XL (1.5B).

Models for Huggingface Transformers

We provide Polish GPT-2 models for Huggingface Transformers. The models were trained using the Megatron-LM library and then converted to the Huggingface format. The released checkpoints support longer contexts than the original GPT-2 by OpenAI: small and medium models support up to 2048 tokens, twice as many as the original GPT-2 and the same as GPT-3, while large and XL models support up to 1536 tokens. Example in Transformers:

from transformers import pipeline

generator = pipeline("text-generation", model="sdadas/polish-gpt2-medium")
results = generator(
    "Policja skontrolowała trzeźwość kierowców",
    max_new_tokens=1024, do_sample=True, repetition_penalty=1.2,
    num_return_sequences=1, num_beams=1, temperature=0.95, top_k=50, top_p=0.95
)
print(results[0].get("generated_text"))
# Policja skontrolowała trzeźwość kierowców. Teraz policjanci przypominają kierowcom o zachowaniu 
# bezpiecznej odległości i środkach ostrożności związanych z pandemią. - Kierujący po spożyciu 
# alkoholu są bardziej wyczuleni na innych uczestników ruchu drogowego oraz mają większą skłonność 
# do brawury i ryzykownego zachowania zwłaszcza wobec pieszych. Dodatkowo nie zawsze pamiętają oni 
# zasady obowiązujących u nas przepisów prawa regulujących kwestie dotyczące odpowiedzialności [...]

Small, Medium, Large, and XL models are available on the Huggingface Hub

Models for Fairseq

We provide Polish versions of the medium and large GPT-2 models trained using the Fairseq library. Example in Fairseq:

import os
from fairseq import hub_utils
from fairseq.models.transformer_lm import TransformerLanguageModel

model_dir = "gpt2_medium_fairseq"
loaded = hub_utils.from_pretrained(
    model_name_or_path=model_dir,
    checkpoint_file="model.pt",
    data_name_or_path=model_dir,
    bpe="hf_byte_bpe",
    bpe_merges=os.path.join(model_dir, "merges.txt"),
    bpe_vocab=os.path.join(model_dir, "vocab.json"),
    load_checkpoint_heads=True,
    archive_map=TransformerLanguageModel.hub_models()
)
model = hub_utils.GeneratorHubInterface(loaded["args"], loaded["task"], loaded["models"])
model.eval()
result = model.sample(
    ["Policja skontrolowała trzeźwość kierowców"],
    beam=5, sampling=True, sampling_topk=50, sampling_topp=0.95,
    temperature=0.95, max_len_a=1, max_len_b=100, no_repeat_ngram_size=3
)
print(result[0])
# Policja skontrolowała trzeźwość kierowców pojazdów. Wszystko działo się na drodze gminnej, między Radwanowem 
# a Boguchowem. - Około godziny 12.30 do naszego komisariatu zgłosił się kierowca, którego zaniepokoiło 
# zachowanie kierującego w chwili wjazdu na tą drogę. Prawdopodobnie nie miał zapiętych pasów - informuje st. asp. 
# Anna Węgrzyniak z policji w Brzezinach. Okazało się, że kierujący był pod wpływem alkoholu. [...]

Download medium or large model for Fairseq v0.10.

Longformer

One of the main constraints of standard Transformer architectures is the limitation on the number of input tokens. There are several known models that allow processing of long documents, one of the most popular being Longformer, introduced in the paper Longformer: The Long-Document Transformer. We provide base and large versions of the Polish Longformer model. The models were initialized with Polish RoBERTa (v2) weights and then fine-tuned on a corpus of long documents, ranging from 1024 to 4096 tokens. Example in Huggingface Transformers:

from transformers import pipeline
fill_mask = pipeline('fill-mask', model='sdadas/polish-longformer-base-4096')
fill_mask('Stolica oraz największe miasto Francji to <mask>.')

Base and large models are available on the Huggingface Hub

Text encoders

The purpose of text encoders is to produce a fixed-length vector representation for chunks of text, such as sentences or paragraphs. These models are used in semantic search, question answering, document clustering, dataset augmentation, plagiarism detection, and other tasks which involve measuring semantic similarity or relatedness between text passages.

Paraphrase mining and semantic textual similarity

We share two models based on the Sentence-Transformers library, trained using the distillation method described in the paper Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation. A corpus of 100 million parallel Polish-English sentence pairs from the OPUS project was used to train the models. You can download them from the Huggingface Hub using the links below.

<table> <thead> <th>Student model</th> <th>Teacher model</th> <th>Download</th> </thead> <tr> <td>polish-roberta-base-v2</td> <td>paraphrase-distilroberta-base-v2</td> <td><a href="https://huggingface.co/sdadas/st-polish-paraphrase-from-distilroberta">st-polish-paraphrase-from-distilroberta</a></td> </tr> <tr> <td>polish-roberta-base-v2</td> <td>paraphrase-mpnet-base-v2</td> <td><a href="https://huggingface.co/sdadas/st-polish-paraphrase-from-mpnet">st-polish-paraphrase-from-mpnet</a></td> </tr> </table>

A simple example in Sentence-Transformers library:

from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

sentences = ["Bardzo lubię jeść słodycze.", "Uwielbiam zajadać się słodkościami."]
model = SentenceTransformer("sdadas/st-polish-paraphrase-from-mpnet")
results = model.encode(sentences, convert_to_tensor=True, show_progress_bar=False)
print(cos_sim(results[0], results[1]))
# tensor([[0.9794]], device='cuda:0')

MMLW

MMLW (muszę mieć lepszą wiadomość) is a set of text encoders trained using a multilingual knowledge distillation method on a diverse corpus of 60 million Polish-English text pairs, which included both sentence and paragraph aligned translations. The encoders are available in the Sentence-Transformers format. We used a two-step process to train the models. In the first step, the encoders were initialized with Polish RoBERTa and multilingual E5 checkpoints, and then distilled utilising English BGE as a teacher model. The resulting models from the distillation step can be used as general-purpose embeddings with applications in various tasks such as text similarity, document clustering, or fuzzy deduplication. The second step involved fine-tuning the obtained models on the Polish MS MARCO dataset with a contrastive loss. The second-stage models are adapted specifically for information retrieval tasks.

We provide a total of ten text encoders, five distilled and five fine-tuned for information retrieval. In the table below, we present the details of the released models.

<table> <thead> <tr> <th colspan="2">Base models</th> <th colspan="2">Stage 1: Distilled models</th> <th colspan="2">Stage 2: Retrieval models</th> </tr> <tr> <th>Student model</th> <th>Teacher model</th> <th><a href="https://huggingface.co/spaces/mteb/leaderboard">PL-MTEB</a><br/>Score</th> <th>Download</th> <th><a href="https://huggingface.co/spaces/sdadas/pirb">PIRB</a><br/>NDCG@10</th> <th>Download</th> </tr> </thead> <tr> <td colspan="6"><strong>Encoders based on Polish RoBERTa</strong></td> </tr> <tr> <td><a href="https://huggingface.co/sdadas/polish-roberta-base-v2">polish-roberta-base-v2</a></td> <td><a href="https://huggingface.co/BAAI/bge-base-en">bge-base-en</a></td> <td>61.05</td> <td><a href="https://huggingface.co/sdadas/mmlw-roberta-base">mmlw-roberta-base</a></td> <td>56.38</td> <td><a href="https://huggingface.co/sdadas/mmlw-retrieval-roberta-base">mmlw-retrieval-roberta-base</a></td> </tr> <tr> <td><a href="https://huggingface.co/sdadas/polish-roberta-large-v2">polish-roberta-large-v2</a></td> <td><a href="https://huggingface.co/BAAI/bge-large-en">bge-large-en</a></td> <td>63.23</td> <td><a href="https://huggingface.co/sdadas/mmlw-roberta-large">mmlw-roberta-large</a></td> <td>58.46</td> <td><a href="https://huggingface.co/sdadas/mmlw-retrieval-roberta-large">mmlw-retrieval-roberta-large</a></td> </tr> <tr> <td colspan="6"><strong>Encoders based on Multilingual E5</strong></td> </tr> <tr> <td><a href="https://huggingface.co/intfloat/multilingual-e5-small">multilingual-e5-small</a></td> <td><a href="https://huggingface.co/BAAI/bge-small-en">bge-small-en</a></td> <td>55.84</td> <td><a href="https://huggingface.co/sdadas/mmlw-e5-small">mmlw-e5-small</a></td> <td>52.34</td> <td><a href="https://huggingface.co/sdadas/mmlw-retrieval-e5-small">mmlw-retrieval-e5-small</a></td> </tr> <tr> <td><a href="https://huggingface.co/intfloat/multilingual-e5-base">multilingual-e5-base</a></td> <td><a href="https://huggingface.co/BAAI/bge-base-en">bge-base-en</a></td> <td>59.71</td> <td><a href="https://huggingface.co/sdadas/mmlw-e5-base">mmlw-e5-base</a></td> <td>56.09</td> <td><a href="https://huggingface.co/sdadas/mmlw-retrieval-e5-base">mmlw-retrieval-e5-base</a></td> </tr> <tr> <td><a href="https://huggingface.co/intfloat/multilingual-e5-large">multilingual-e5-large</a></td> <td><a href="https://huggingface.co/BAAI/bge-large-en">bge-large-en</a></td> <td>61.17</td> <td><a href="https://huggingface.co/sdadas/mmlw-e5-large">mmlw-e5-large</a></td> <td>58.30</td> <td><a href="https://huggingface.co/sdadas/mmlw-retrieval-e5-large">mmlw-retrieval-e5-large</a></td> </tr> </table>

Please note that the developed models require the use of specific prefixes and suffixes when encoding texts. For RoBERTa-based encoders, each query should be preceded by the prefix "zapytanie: ", and no prefix is needed for passages. For E5-based models, queries should be prefixed with "query: " and passages with "passage: ". An example of how to use the models:

from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

query_prefix = "zapytanie: "      # "zapytanie: " for roberta, "query: " for e5
answer_prefix = ""                # empty for roberta, "passage: " for e5
queries = [query_prefix + "Jak dożyć 100 lat?"]
answers = [
    answer_prefix + "Trzeba zdrowo się odżywiać i uprawiać sport.",
    answer_prefix + "Trzeba pić alkohol, imprezować i jeździć szybkimi autami.",
    answer_prefix + "Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."
]
model = SentenceTransformer("sdadas/mmlw-retrieval-roberta-base")
queries_emb = model.encode(queries, convert_to_tensor=True, show_progress_bar=False)
answers_emb = model.encode(answers, convert_to_tensor=True, show_progress_bar=False)

best_answer = cos_sim(queries_emb, answers_emb).argmax().item()
print(answers[best_answer])
# Trzeba zdrowo się odżywiać i uprawiać sport.

Machine translation models

This section includes pre-trained machine translation models.

Convolutional models for Fairseq

We provide Polish-English and English-Polish convolutional neural machine translation models trained using the Fairseq sequence modeling toolkit. Both models were trained on a parallel corpus of more than 40 million sentence pairs taken from the OPUS collection. Example of usage (the fairseq, sacremoses and subword-nmt Python packages are required to run this example):

from fairseq.models import BaseFairseqModel

model_path = "/polish-english/"
model = BaseFairseqModel.from_pretrained(
    model_name_or_path=model_path,
    checkpoint_file="checkpoint_best.pt",
    data_name_or_path=model_path,
    tokenizer="moses",
    bpe="subword_nmt",
    bpe_codes="code",
    cpu=True
)
print(model.translate(sentence="Zespół astronomów odkrył w konstelacji Panny niezwykłą planetę.", beam=5))
# A team of astronomers discovered an extraordinary planet in the constellation of Virgo.

Polish-English convolutional model: Download (GitHub)
English-Polish convolutional model: Download (GitHub)

T5-based models

We share mT5 and Flan-T5 models fine-tuned for Polish-English and English-Polish translation. The models were trained on 70 million sentence pairs from OPUS. You can download them from the Huggingface Hub using the links below. An example of how to use the models:

from transformers import pipeline
generator = pipeline("translation", model="sdadas/flan-t5-base-translator-en-pl")
sentence = "A team of astronomers discovered an extraordinary planet in the constellation of Virgo."
print(generator(sentence, max_length=512))
# [{'translation_text': 'Zespół astronomów odkrył niezwykłą planetę w gwiazdozbiorze Panny.'}]

The following models are available on the Huggingface Hub: mt5-base-translator-en-pl, mt5-base-translator-pl-en, flan-t5-base-translator-en-pl

Fine-tuned models

ByT5-text-correction

A small multilingual utility model intended for simple text correction. It is designed to improve the quality of texts from the web, which often lack punctuation or proper word capitalization. The model was trained to perform three types of corrections: restoring punctuation in sentences, restoring word capitalization, and restoring diacritical marks for languages that include them.

The following languages are supported: Belarusian (be), Danish (da), German (de), Greek (el), English (en), Spanish (es), French (fr), Italian (it), Dutch (nl), Polish (pl), Portuguese (pt), Romanian (ro), Russian (ru), Slovak (sk), Swedish (sv), Ukrainian (uk). The model takes as input a sentence preceded by a language code prefix. For example:

from transformers import pipeline
generator = pipeline("text2text-generation", model="sdadas/byt5-text-correction")
sentences = [
    "<pl> ciekaw jestem na co licza onuce stawiajace na sykulskiego w nadziei na zwrot ku rosji",
    "<de> die frage die sich die europäer stellen müssen lautet ist es in unserem interesse die krise auf taiwan zu beschleunigen",
    "<ru> при своём рождении 26 августа 1910 года тереза получила имя агнес бояджиу"
]
generator(sentences, max_length=512)
# Ciekaw jestem na co liczą onuce stawiające na Sykulskiego w nadziei na zwrot ku Rosji.
# Die Frage, die sich die Europäer stellen müssen, lautet: Ist es in unserem Interesse, die Krise auf Taiwan zu beschleunigen?
# При своём рождении 26 августа 1910 года Тереза получила имя Агнес Бояджиу.

The model is available on the Huggingface Hub: byt5-text-correction

Text ranking models

We provide a set of text ranking models that can be used in the reranking phase of retrieval-augmented generation (RAG) pipelines. Our goal was to build efficient models that combine high accuracy with relatively low computational complexity. We employed Polish RoBERTa language models, fine-tuning them for the text ranking task on a large dataset consisting of 1.4 million queries and 10 million documents. The models were trained using two knowledge distillation methods: a standard technique based on the mean squared error (MSE) loss and the RankNet algorithm that enforces sorting lists of documents according to their relevance to the query. The RankNet method has proven to be more effective. Below is a summary of the released models:

<table> <thead> <th>Model</th> <th>Parameters</th> <th>Training method</th> <th><a href="https://huggingface.co/spaces/sdadas/pirb">PIRB</a><br/>NDCG@10</th> </thead> <tr> <td><a href="https://huggingface.co/sdadas/polish-reranker-base-ranknet">polish-reranker-base-ranknet</a></td> <td>124M</td> <td>RankNet</td> <td>60.32</td> </tr> <tr> <td><a href="https://huggingface.co/sdadas/polish-reranker-large-ranknet">polish-reranker-large-ranknet</a></td> <td>435M</td> <td>RankNet</td> <td>62.65</td> </tr> <tr> <td><a href="https://huggingface.co/sdadas/polish-reranker-base-mse">polish-reranker-base-mse</a></td> <td>124M</td> <td>MSE</td> <td>57.50</td> </tr> <tr> <td><a href="https://huggingface.co/sdadas/polish-reranker-large-mse">polish-reranker-large-mse</a></td> <td>435M</td> <td>MSE</td> <td>60.27</td> </tr> </table>

The models can be used with the sentence-transformers library:

from sentence_transformers import CrossEncoder
import torch.nn

query = "Jak dożyć 100 lat?"
answers = [
    "Trzeba zdrowo się odżywiać i uprawiać sport.",
    "Trzeba pić alkohol, imprezować i jeździć szybkimi autami.",
    "Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."
]

model = CrossEncoder(
    "sdadas/polish-reranker-large-ranknet",
    default_activation_function=torch.nn.Identity(),
    max_length=512,
    device="cuda" if torch.cuda.is_available() else "cpu"
)
pairs = [[query, answer] for answer in answers]
results = model.predict(pairs)
print(results.tolist())
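
The predict method returns one relevance score per query-answer pair, so the highest-ranked answer can be selected with a simple argmax, for example:

best_answer = answers[results.argmax()]
print(best_answer)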

Dictionaries and lexicons

Polish, English and foreign person names

This lexicon contains 346 thousand forenames and last names labeled as Polish, English, or foreign (other), crawled from multiple Internet sources. Possible labels are: P-N (Polish forename), P-L (Polish last name), E-N (English forename), E-L (English last name), F (foreign / other). For each word, there is an additional flag indicating whether this name is also used as a common word in Polish (C for common, U for uncommon).
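
As an illustration, a hypothetical loading sketch. It assumes the lexicon is distributed as a plain-text file with one tab-separated entry per line (name, label, common-word flag); check the downloaded file for the exact column layout and adjust accordingly.

import gzip

def load_lexicon(path: str):
    # Hypothetical format: one entry per line, e.g. "nowak<TAB>P-L<TAB>U".
    open_func = gzip.open if path.endswith(".gz") else open
    with open_func(path, "rt", encoding="utf-8") as lexicon_file:
        return [tuple(line.strip().split("\t")) for line in lexicon_file]

# Keep only Polish last names (label P-L); the file name is a placeholder.
entries = [e for e in load_lexicon("names.txt") if len(e) == 3]
print([name for name, label, flag in entries if label == "P-L"][:10])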

Download (GitHub)

Named entities extracted from SJP.PL

This dictionary consists mostly of the names of settlements, geographical regions, countries, continents and words derived from them (relational adjectives and inhabitant names). Besides that, it also contains names of popular brands, companies and common abbreviations of institutions' names. This resource was created in a semi-automatic way, by extracting the words and their forms from SJP.PL using a set of heuristic rules and then manually filtering out words that weren't named entities.

Download (GitHub)

Links to external resources

Repositories of linguistic tools and resources

Publicly available large Polish text corpora (> 1GB)

Models supporting Polish language

Sentence analysis (tokenization, lemmatization, POS tagging etc.)

Machine translation

Language models

Sentence encoders

Optical character recognition (OCR)

Speech processing (speech recognition, text-to-speech, voice cloning etc.)

Multimodal models