
MultiFiT: Efficient Multi-lingual Language Model Fine-tuning

Code to reproduce the paper "MultiFiT: Efficient Multi-lingual Language Model Fine-tuning".

Here is a blog post introducing our paper: http://nlp.fast.ai/classification/2019/09/10/multifit.html

This repository contains a small framework on top of fastai v1.0. The code is compatible with v1.0.47 through v1.0.59 (the latest release as of 2019-11-03). Results may differ between fastai versions due to optimizations added to fastai; our models were trained with v1.0.47.
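To avoid version-related differences, you can pin fastai to the tested range when installing. The exact requirement specifier below is a suggestion, not something this repository ships:

```shell
# Pin fastai to the range this repository was tested with (v1.0.47–v1.0.59).
pip install "fastai>=1.0.47,<=1.0.59"
```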

The framework was rewritten to make it easier to use with the newest fastai.

We released 7 language models trained on the corresponding Wikipedia dumps:

To fetch a model, use the multifit.from_pretrained function. Here are some example notebooks showing how to train a classifier using a pretrained model.

Results

MLDoc

Document classification results on the MLDoc dataset (Schwenk and Li, 2018):

| Model | de | es | fr | it | ja | ru | zh |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LASER | 92.70 | 88.75 | 90.80 | 85.93 | 85.15 | 84.65 | 88.98 |
| MultiBERT | 94.00 | 95.15 | 93.20 | 85.82 | 87.48 | 86.85 | 90.72 |
| MultiFiT | 95.90 | 96.07 | 94.77 | 90.25 | 90.03 | 87.65 | 92.52 |
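As a quick sanity check on the table above, the per-model averages across the seven languages can be computed in a few lines of Python. The scores are copied from the table; the averaging is only illustrative and is not a metric reported in the paper:

```python
# MLDoc accuracies per language (de, es, fr, it, ja, ru, zh), copied from the table above.
mldoc = {
    "LASER":     [92.70, 88.75, 90.80, 85.93, 85.15, 84.65, 88.98],
    "MultiBERT": [94.00, 95.15, 93.20, 85.82, 87.48, 86.85, 90.72],
    "MultiFiT":  [95.90, 96.07, 94.77, 90.25, 90.03, 87.65, 92.52],
}

# Mean accuracy per model, rounded to two decimals.
averages = {model: round(sum(scores) / len(scores), 2) for model, scores in mldoc.items()}
print(averages)
# → {'LASER': 88.14, 'MultiBERT': 90.46, 'MultiFiT': 92.46}
```

MultiFiT leads on every language, so it also leads on the average.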

Amazon CLS

Sentiment classification results on the CLS dataset (Prettenhofer and Stein, 2010):

| Model | DE | FR | JA |
| --- | --- | --- | --- |
| MultiBERT | 86.05 / 84.90 / 82.00 | 86.15 / 86.90 / 86.65 | 80.87 / 82.83 / 79.95 |
| MultiFiT | 93.19 / 90.54 / 93.00 | 91.25 / 89.55 / 93.40 | 86.29 / 85.75 / 86.59 |
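Each CLS cell above lists three scores per language. Averaging them per language gives MultiFiT's margin over MultiBERT; the numbers are copied from the table, and the computation is only illustrative:

```python
# CLS accuracies: three scores per language, copied from the table above.
cls = {
    "MultiBERT": {"DE": [86.05, 84.90, 82.00], "FR": [86.15, 86.90, 86.65], "JA": [80.87, 82.83, 79.95]},
    "MultiFiT":  {"DE": [93.19, 90.54, 93.00], "FR": [91.25, 89.55, 93.40], "JA": [86.29, 85.75, 86.59]},
}

# Average margin of MultiFiT over MultiBERT per language, in accuracy points.
margins = {
    lang: round(sum(cls["MultiFiT"][lang]) / 3 - sum(cls["MultiBERT"][lang]) / 3, 2)
    for lang in ["DE", "FR", "JA"]
}
print(margins)
# → {'DE': 7.93, 'FR': 4.83, 'JA': 4.99}
```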

How to use it with fastai v1.0

You can use the pretrained models with the fastai library as follows:

```python
from fastai.text import *

import multifit

# Fetch a pretrained MultiFiT language model by name.
exp = multifit.from_pretrained("name of the model")
fa_config = exp.pretrain_lm.tokenizer.get_fastai_config(add_open_file_processor=True)

# Build a language-model databunch; imdb_path and bs are assumed to be defined.
data_lm = (TextList.from_folder(imdb_path, **fa_config)
           .filter_by_folder(include=['train', 'test', 'unsup'])
           .split_by_rand_pct(0.1)
           .label_for_lm()
           .databunch(bs=bs))

learn = exp.finetune_lm.get_learner(data_lm)
# learn is a preconfigured fastai learner with the pretrained model loaded
learn.fit_one_cycle(10)
learn.save_encoder("enc")
...
```

Reproducing the results

This repository is a rewrite of the original training scripts, so it lacks the scripts used in the paper. We are working on a port to fastai v2.0, after which we will add scripts showing how to reproduce the results. If you need the scripts sooner, you can access the original scripts here.

Citation

@inproceedings{Eisenschlos2019MultiFit,
  title={MultiFiT: Efficient Multi-lingual Language Model Fine-tuning},
  author={Eisenschlos, Julian and Ruder, Sebastian and Czapla, Piotr and Kardas, Marcin and Gugger, Sylvain and Howard, Jeremy},
  booktitle={Proceedings of EMNLP-IJCNLP 2019},
  year={2019}
}