Experimental Finnish language model for spaCy

Finnish language model for spaCy. The model performs POS tagging, dependency parsing, word vector lookup, noun phrase extraction, word occurrence probability estimates, morphological analysis, lemmatization and named entity recognition (NER). Lemmatization is based on Voikko.

The main differences between this model and the Finnish language model in the spaCy core:

Want a hassle-free installation? Install the spaCy core model. Need the highest possible accuracy, especially for lemmatization? Install this model.

I plan to continue experimenting with new ideas in this repository and to push the useful features to the spaCy core after testing them here.

The training data consists of web pages collected during 2014-2020, before the rise of the AI slop surge. The data still contains some ordinary spam and poorly machine-translated pages; I have made some effort to filter out the most conspicuous spam pages.

Install the Finnish language model

First, install the libvoikko native library and the Finnish morphology data files.

Next, install the model by running:

pip install spacy_fi_experimental_web_md

Compatibility with spaCy versions:

spacy-fi version    Compatible spaCy versions
0.15.x              3.8.x
0.14.0              3.7.x
0.13.0              3.6.x
0.12.0              3.5.x
0.11.0              3.4.x
0.10.0              3.3.x
0.9.0               >= 3.2.1 and < 3.3.0
0.8.x               3.2.x
0.7.x               3.0.x, 3.1.x
0.6.0               3.0.x
0.5.0               3.0.x
0.4.x               2.3.x
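The table above can also be read programmatically. The following is a small hypothetical helper (the function name and the fallback string are illustrative, not part of the package) that maps an installed spaCy version to the matching spacy-fi release series:

```python
def spacy_fi_series(spacy_version: str) -> str:
    """Return the spacy-fi release series matching a spaCy version,
    per the compatibility table above. Illustrative only."""
    # Only the major.minor part determines the series.
    major, minor = (int(part) for part in spacy_version.split(".")[:2])
    series = {
        (3, 8): "0.15.x",
        (3, 7): "0.14.0",
        (3, 6): "0.13.0",
        (3, 5): "0.12.0",
        (3, 4): "0.11.0",
        (3, 3): "0.10.0",
        (3, 2): "0.8.x",  # 0.9.0 additionally requires spaCy >= 3.2.1
        (3, 1): "0.7.x",
        (3, 0): "0.7.x",  # 0.5.0 and 0.6.0 also target 3.0.x
        (2, 3): "0.4.x",
    }
    return series.get((major, minor), "no compatible release")

print(spacy_fi_series("3.8.2"))  # -> 0.15.x
```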

Usage

import spacy

nlp = spacy.load('spacy_fi_experimental_web_md')

doc = nlp('Hän ajoi punaisella autolla.')
for t in doc:
    print(f'{t.lemma_}\t{t.pos_}')

The dependency, part-of-speech, and named entity labels are documented on a separate page.

Updating the model

Setting up a development environment

# Install the libvoikko native library with Finnish morphology data.
#
# This will install Voikko on Debian/Ubuntu.
# For other distros and operating systems, see https://voikko.puimula.org/python.html
sudo apt install libvoikko1 voikko-fi

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Training the model

spacy project assets
spacy project run train-pipeline

Optional steps (slow!) for training certain model components. These steps are not strictly required because their results have been pre-computed and stored in git.

Train floret embeddings:

spacy project run floret-vectors

Pretrain tok2vec weights:

spacy project run pretrain

Plot the pretraining loss:

python tools/plot_pretrain_loss.py training/pretrain/log.jsonl

Testing

Unit tests:

python -m pytest tests/unit

Functional tests for a trained model:

python -m pytest tests/functional

Importing the trained model directly from the file system without packaging it as a module:

import spacy
import fi

nlp = spacy.load('training/merged')

doc = nlp('Hän ajoi punaisella autolla.')
for t in doc:
    print(f'{t.lemma_}\t{t.pos_}')

Packaging and publishing

See packaging.md.

License

MIT license

Licenses for the training data

The datasets used in training are licensed as follows: