# An Effective Domain Adaptive Post-Training Method for BERT in Response Selection
This repository implements the model described in the paper *An Effective Domain Adaptive Post-Training Method for BERT in Response Selection*.
```
@inproceedings{whang2020domain,
  author={Whang, Taesun and Lee, Dongyub and Lee, Chanhee and Yang, Kisu and Oh, Dongsuk and Lim, HeuiSeok},
  title={An Effective Domain Adaptive Post-Training Method for BERT in Response Selection},
  year={2020},
  booktitle={Proc. Interspeech 2020}
}
```
This code is reimplemented as a fork of huggingface/transformers.
<p align="center"> <img src="model_overview.jpg" width="500"/> </p>

## Data Creation
- Download `ubuntu_train.pkl`, `ubuntu_valid.pkl`, and `ubuntu_test.pkl` here, or create the `pkl` files yourself to train the BERT-based response selection model. If you wish to create them, download the `ubuntu_corpus_v1` dataset here, provided by Xu et al. (2016), and keep the files under the `data/ubuntu_corpus_v1` directory.
- The Ubuntu corpus for domain post-training will be created by running:
```bash
python data/data_utils.py
```
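If you build the `pkl` files yourself, you can sanity-check the result with a few lines of Python. This is a minimal sketch that only assumes standard `pickle` serialization; the exact structure of each stored example is whatever `data/data_utils.py` writes.

```python
import pickle

# Load one of the generated splits and inspect it.
# The layout of each example depends on data/data_utils.py.
with open("data/ubuntu_corpus_v1/ubuntu_train.pkl", "rb") as f:
    examples = pickle.load(f)

print(type(examples), len(examples))  # container type and number of examples
print(examples[0])                    # peek at the first example
```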
## Post-Training Data Creation
Download the `ubuntu_post_training.txt` corpus here and simply run:

```bash
python data/create_bert_post_training_data.py
```
After creating the post-training data, keep the `ubuntu_post_training.hdf5` file under the `data/ubuntu_corpus_v1` directory.
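Before launching post-training, it can help to verify that the HDF5 file was written correctly. Below is a minimal sketch using `h5py` (not one of this repo's scripts); rather than assuming dataset names, it simply enumerates what `create_bert_post_training_data.py` stored.

```python
import h5py

# List every dataset in the post-training corpus with its shape and dtype.
with h5py.File("data/ubuntu_corpus_v1/ubuntu_post_training.hdf5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
```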
## Domain Post-Training BERT
To domain post-train BERT, simply run:

```bash
python main.py --model bert_ubuntu_pt --train_type post_training --bert_pretrained bert-base-uncased --data_dir ./data/ubuntu_corpus_v1/ubuntu_post_training.hdf5
```
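Conceptually, domain post-training continues BERT's original pre-training objectives, masked language modeling (MLM) and next sentence prediction (NSP), on the Ubuntu dialogue corpus before any task fine-tuning. The sketch below illustrates those two losses with Hugging Face's `BertForPreTraining`; it is not this repo's training loop, and the inputs are toy placeholders for what the HDF5 file provides.

```python
import torch
from transformers import BertForPreTraining, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# Toy segment pair from the target domain; real pairs come from the hdf5 corpus.
enc = tokenizer("i tried sudo apt-get update", "did you check your sources list",
                return_tensors="pt")

# MLM labels: -100 (ignored) everywhere except the positions we mask.
labels = torch.full_like(enc["input_ids"], -100)
labels[0, 2] = enc["input_ids"][0, 2].clone()
enc["input_ids"][0, 2] = tokenizer.mask_token_id

# next_sentence_label=0 means segment B really follows segment A.
out = model(**enc, labels=labels, next_sentence_label=torch.tensor([0]))
out.loss.backward()  # combined MLM + NSP loss
```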
## BERT Fine-tuning (Response Selection)
### Training
Train a response selection model based on `BERT_base`:

```bash
python main.py --model bert_base_ft --train_type fine_tuning --bert_pretrained bert-base-uncased
```
Train a response selection model based on the domain post-trained BERT. If you wish to get the domain post-trained BERT, download the model checkpoint (`bert-post-uncased-pytorch_model.pth`) here, and keep the checkpoint under the `resources/bert-post-uncased` directory:

```bash
python main.py --model bert_dpt_ft --train_type fine_tuning --bert_pretrained bert-post-uncased
```
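For reference, the fine-tuning task reduces to scoring a (context, response) pair: the dialogue context and a candidate response are fed to BERT as its two segments, and the pooled `[CLS]` representation is classified as relevant or not. Below is a hedged sketch of one scoring step with the stock `BertForSequenceClassification`; the model classes in this repo may differ in detail.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=2)
model.eval()

# Context utterances joined into segment A; the candidate response is segment B.
context = "i cannot mount my usb drive what does dmesg say"
response = "paste the last few lines of dmesg here"

enc = tokenizer(context, response, truncation=True, max_length=512,
                return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits
score = torch.softmax(logits, dim=-1)[0, 1]  # probability the response is relevant
print(float(score))
```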
### Evaluation
To evaluate the `bert_base` and `bert_dpt` models, set a model checkpoint path and simply run:

```bash
python main.py --model bert_dpt_ft --train_type fine_tuning --bert_pretrained bert-post-uncased --evaluate /path/to/checkpoint.pth
```
If you wish to use the trained response selection models, we provide the model checkpoints below.
| Model | R@1 | R@2 | R@5 | MRR |
|---|---|---|---|---|
| BERT_base | 0.8115 | 0.9003 | 0.9768 | 0.8809 |
| BERT_DPT | 0.8515 | 0.9272 | 0.9851 | 0.9081 |
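These metrics follow the standard Ubuntu Corpus v1 protocol: each test context comes with a pool of candidate responses containing one true response, R@k is the fraction of contexts whose true response is ranked within the top k, and MRR is the mean reciprocal rank of the true response. A small self-contained sketch of how these can be computed from per-candidate scores (the evaluation code in this repo may differ):

```python
from typing import Dict, Sequence, Tuple

def rank_metrics(score_pools: Sequence[Sequence[float]],
                 label_pools: Sequence[Sequence[int]],
                 ks: Sequence[int] = (1, 2, 5)) -> Tuple[Dict[str, float], float]:
    """R@k and MRR over candidate pools, each with exactly one positive."""
    recalls = {k: 0 for k in ks}
    mrr = 0.0
    for scores, labels in zip(score_pools, label_pools):
        # Rank candidates by score, highest first.
        order = sorted(range(len(scores)), key=lambda i: -scores[i])
        # 1-based rank of the true response in the ranking.
        rank = next(pos for pos, i in enumerate(order, start=1) if labels[i] == 1)
        for k in ks:
            recalls[k] += rank <= k
        mrr += 1.0 / rank
    n = len(score_pools)
    return {f"R@{k}": recalls[k] / n for k in ks}, mrr / n

# Toy example: two contexts, 10 candidates each, true response at index 0.
scores = [[0.9] + [0.1] * 9, [0.4, 0.6] + [0.1] * 8]
labels = [[1] + [0] * 9] * 2
print(rank_metrics(scores, labels))  # ({'R@1': 0.5, 'R@2': 1.0, 'R@5': 1.0}, 0.75)
```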
## Acknowledgements
- This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2016-0-00010-003, Digital Content In-House R&D).
- Work in collaboration with Kakao Corp.