GermLM

Exploring Multilingual Language Models and their effectiveness for Named Entity Recognition (NER) in German and English.

Requirements

Requirements can be installed via pip using requirements.txt:

pip install -r requirements.txt

It is recommended to run the experiments on at least one GPU; our experiments were conducted with two. Prediction runs on CPU only.

NER experiments

We use Google's BERT models (English BERT Base and Multilingual BERT Base, both cased) and evaluate them on the CoNLL-2003 NER dataset.

Create the appropriate datasets using the Makefile.

Run run_ner.py. Usage (the most important options, as used in the examples below): --do-train, --do-eval, --lr, --batch-size, --epochs, --bertAdam, --dataset, --lang, --tuned-learner.

(Example) Replicating English BERT NER experiment

Create the dataset:

make dataset-engI

Train the NER model:

python run_ner.py --do-train --do-eval --lr=3e-5 --batch-size=16 --epochs=4 --bertAdam --dataset=data/conll-2003-I/

[DEMO] Use your trained model for NER

If you run run_ner.py with the save flag, the saved model can be loaded in predict.py, which will recognise the named entities in the sentences you provide. Note that you only need to provide the file name; the learner will automatically look for it in its directory and append the correct extension.

python predict.py eng_3_model
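
As an illustration of that file-name lookup, here is a minimal sketch; the models directory and the .pth extension are assumptions for illustration only, predict.py defines its own:

```python
from pathlib import Path

def resolve_model_path(name, model_dir="models", ext=".pth"):
    """Illustrative only: turn a bare run name like 'eng_3_model' into the
    full path a loader would look for. The directory and extension here are
    assumptions; the actual values live in predict.py."""
    path = Path(name)
    if not path.suffix:          # bare name -> append the expected extension
        path = path.with_suffix(ext)
    return Path(model_dir) / path
```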

Example output:

Loading model...
Lang: eng
Model: bert-base-cased
Run: eng_3_model
Done
Enter sentence: Antonia goes to Trinity College Dublin, in Ireland.
input:  ['[CLS]', 'Anton', '##ia', 'goes', 'to', 'Trinity', 'College', 'Dublin', ',', 'in', 'Ireland', '.', '[SEP]']
tensor([0, 4, 0, 1, 1, 5, 5, 5, 1, 1, 2, 1, 0])
Named Entities
Antonia I-PER
goes O
to O
Trinity I-ORG
College I-ORG
Dublin, I-ORG
in O
Ireland. I-LOC
Enter sentence: ...
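
The label indices in the tensor above map back to IOB tags at the word level. A minimal decoding sketch; the index-to-tag mapping and the subtoken merge rule are assumptions inferred from the sample output, not the exact logic of predict.py:

```python
# Hypothetical index-to-tag mapping, inferred from the sample output above
# (0 is used for special tokens and WordPiece continuations).
ID2TAG = {0: "X", 1: "O", 2: "I-LOC", 4: "I-PER", 5: "I-ORG"}

def decode(wordpieces, label_ids):
    """Merge WordPiece subtokens back into words, keeping the tag
    predicted for each word's first piece."""
    words, tags = [], []
    for piece, label in zip(wordpieces, label_ids):
        if piece in ("[CLS]", "[SEP]"):
            continue
        if piece.startswith("##"):           # continuation of the previous word
            words[-1] += piece[2:]
        else:
            words.append(piece)
            tags.append(ID2TAG.get(label, "O"))
    return list(zip(words, tags))

pieces = ['[CLS]', 'Anton', '##ia', 'goes', 'to', 'Trinity', 'College',
          'Dublin', ',', 'in', 'Ireland', '.', '[SEP]']
labels = [0, 4, 0, 1, 1, 5, 5, 5, 1, 1, 2, 1, 0]
for word, tag in decode(pieces, labels):
    print(word, tag)
```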

Fine-tuning experiments

We apply the LM fine-tuning methods from ULMFiT to the BERT model in order to boost performance. In our experiments this did not help (see Results below).
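
One of the ULMFiT techniques is discriminative (layer-wise) learning rates. A minimal sketch of the idea, independent of our actual training code; the layer names and the decay factor 2.6 (the ULMFiT paper's suggestion) are illustrative, not taken from this repo:

```python
def discriminative_lrs(layer_names, base_lr=5e-5, decay=2.6):
    """Assign each layer the rate base_lr / decay**depth, so layers closer
    to the input (which capture more general features) are updated more
    gently than the task-specific top layers."""
    n = len(layer_names)
    # The last name is the top (task) layer and gets the full base_lr;
    # each step down the stack divides the rate by `decay`.
    return {name: base_lr / (decay ** (n - 1 - i))
            for i, name in enumerate(layer_names)}

# Illustrative BERT-Base-like stack: embeddings, 12 encoder layers, classifier.
layers = ["embeddings"] + [f"encoder.layer.{i}" for i in range(12)] + ["classifier"]
lrs = discriminative_lrs(layers)
```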

LM - pretraining

Use conl_to_docs from ner_data.py to convert the training set into a document of sentences.
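
For reference, a minimal sketch of what such a conversion does; this is a hypothetical reimplementation, not the actual conl_to_docs code. In CoNLL format each non-empty line holds one token plus its annotations, sentences are separated by blank lines, and -DOCSTART- lines mark document boundaries:

```python
def conll_to_sentences(lines):
    """Collapse CoNLL-formatted lines (one token per line, blank line
    between sentences) into plain-text sentences for LM fine-tuning."""
    sentences, tokens = [], []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("-DOCSTART-"):
            if tokens:
                sentences.append(" ".join(tokens))
                tokens = []
            continue
        tokens.append(line.split()[0])   # first column is the word form
    if tokens:                            # flush a trailing sentence
        sentences.append(" ".join(tokens))
    return sentences

sample = [
    "-DOCSTART- -X- -X- O",
    "",
    "EU NNP B-NP I-ORG",
    "rejects VBZ B-VP O",
    "German JJ B-NP I-MISC",
    "call NN I-NP O",
    ". . O O",
    "",
]
print(conll_to_sentences(sample))  # ['EU rejects German call .']
```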

Use the output file you specified as input to the data generation:

make 2bert DIR='data/conll-2003/eng/' M='bert-base-cased' E=20

Then fine-tune the language model on the task data:

make pretrain_lm FILE='lm_finetune.py' DIR='data/conll-2003/deu/' M='bert-base-multilingual-cased' E=20 

Task-finetuning

Learning rates were selected using the Jupyter notebooks.

Run task-finetuning.py to fine-tune using the tuning methods from ULMFiT. Add --tuned-learner to load the fine-tuned LM:

python task-finetuning.py --batch-size=16 --epochs=4 --lr=5e-5 --do-train --do-eval --dataset=data/conll-2003-I/ --lang=deu --tuned-learner='pretrain/pytorch_fastai_model_i_bert-base-multilingual-cased_10.bin'

Results

English

model                     dataset  dev F1  test F1
BERT Large                -        96.6    92.8
BERT Base                 -        96.4    92.4
English BERT (ours)       IOB1     96.4    92.6
English BERT (ours)       BIO      95.6    92.2
Multilingual BERT (ours)  IOB1     96.4    91.9
Multilingual BERT (ours)  BIO      96.5    92.1

German

model                     dataset  dev F1  test F1
Ahmed & Mehler            IOB1     -       83.64
Riedl & Pado              -        -       84.73
Multilingual BERT (ours)  IOB1     88.44   85.81
Multilingual BERT (ours)  BIO      87.49   84.98

Fine-tuning showed no improvement; the results stayed about the same.

File overview: