ChineseEHRBert

A Chinese Electronic Health Record BERT pretrained model.

Chinese version

cleaner

The cleaner is responsible for cleaning the txt files used to train a Chinese BERT model. It splits each original line into smaller lines, each of which is a complete sentence ending with a punctuation mark. This is required by the next sentence prediction training task.
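The splitting step might look like the following sketch (a simplified illustration, not the actual cleaner code), which breaks a raw line at Chinese sentence-final punctuation so every output line is one complete sentence:

```python
import re

# Chinese sentence-final punctuation (。！？；) plus ASCII counterparts.
SENT_END = re.compile(r'([^。！？；!?;]*[。！？；!?;])')

def split_sentences(line):
    """Split one raw line into complete sentences, each keeping
    its trailing punctuation mark."""
    sentences = [s.strip() for s in SENT_END.findall(line)]
    return [s for s in sentences if s]

# Example: one long record line becomes three sentence lines.
for sentence in split_sentences("患者入院。查体无异常！建议复查？"):
    print(sentence)
```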

usage

cd ./cleaner/
python parser.py [-h] [--input INPUT] [--output OUTPUT] [-s] [--log LOG]

train

Pre-train a BERT model with the cleaned text. We first generate .tfrecord files, then pre-train with Google's code. Note that the cleaned files may be too big to load into RAM, so our script splits them and generates multiple .tfrecord files.

usage

Split the file and convert it to .tfrecord:

cd ./train/
python make_pretrain_bert.py [-h] [-f FILE_PATH] [-s SPLIT_LINE]
                             [-p SPLIT_PATH] [-o OUTPUT_PATH] [-l MAX_LENGTH]
                             [-b BERT_BASE_DIR]

Adjust the parameters in pretrain128.sh and pretrain512.sh to your specific requirements.

sh pretrain128.sh
sh pretrain512.sh

test

Test Chinese medical NLP tasks with BERT in one line! The suite covers two NER tasks, one QA task, one RE task, and one sentence similarity task.

cd ./test/
sh run_test.sh

Tasks include CCKS2019NER, cMedQA2, Tianchi_NER, Tianchi_RE, and ncov2019_sim.

Results

Results comparing the original BERT with ChineseEHRBert are in preparation.

Citation

Author