HET-MC

This repository contains the implementation of Summarizing Medical Conversations via Identifying Important Utterances, published at COLING 2020.

If you have any questions, you can e-mail Yuanhe Tian at yhtian@uw.edu.

🔥 News 🔥

We recently released ChiMed-GPT, a large language model for the Chinese medical domain trained on medical dialogue data. For more information, please visit our GitHub Repo.

Citation

If you use or extend our work, please cite our COLING 2020 paper:

@inproceedings{song-etal-2020-summarizing,
    title = "Summarizing Medical Conversations via Identifying Important Utterances",
    author = "Song, Yan and Tian, Yuanhe and Wang, Nan and Xia, Fei",
    booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
    month = dec,
    year = "2020",
    address = "Barcelona, Spain (Online)",
    pages = "717--729",
}

Requirements

Our code works with the following environment.

Dataset

To obtain the data, please see the data_preprocessing directory for details.

Downloading BERT, ZEN and HET-MC

In our paper, we use BERT (paper) and ZEN (paper) as the encoder.

For BERT, please download the pre-trained BERT-Base Chinese model from Google or from HuggingFace. If you download it from Google, you need to convert the model from the TensorFlow format to the PyTorch format.
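
If you need to do this conversion, one option is the command-line converter that ships with the transformers package. The sketch below is only a suggestion: it assumes transformers is installed and that the Google checkpoint was unpacked into a directory named chinese_L-12_H-768_A-12 (both the package choice and the paths are assumptions, not part of this repository).

    # Convert Google's TensorFlow BERT-Base Chinese checkpoint to PyTorch.
    # Adjust the paths to wherever you unpacked the checkpoint.
    transformers-cli convert --model_type bert \
        --tf_checkpoint chinese_L-12_H-768_A-12/bert_model.ckpt \
        --config chinese_L-12_H-768_A-12/bert_config.json \
        --pytorch_dump_output chinese_L-12_H-768_A-12/pytorch_model.bin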

For ZEN, you can download the pre-trained model from here.

For HET-MC, you can download the models we trained in our experiments from here (passcode: b1w1).

Run on Sample Data

Run run_sample.sh to train a model on the small sample data under the sample_data directory.
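
For example, from the repository root (assuming a Unix-like shell):

    # Quick sanity check: train a model on the bundled sample data.
    bash run_sample.sh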

Training and Testing

You can find the command lines to train and test models in run.sh.

Several important parameters are set in these command lines.
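
As a rough illustration only, a training command might look like the sketch below. The script name and every flag in it are hypothetical placeholders, not the actual interface of this repository, so please consult run.sh for the real command lines.

    # Hypothetical invocation for illustration only; the real script name and
    # flags are defined in run.sh and may differ.
    python main.py \
        --do_train \
        --train_file /path/to/train.json \
        --bert_model /path/to/bert-base-chinese \
        --output_dir ./models/het_mc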

To-do List

If there are any features you would like us to implement, please leave comments in the Issues section.