IMCS-21
This repo contains a new corpus benchmark called IMCS-21 for automated medical consultation systems, as well as the code for reproducing the experiments in our Bioinformatics 2022 paper A Benchmark for Automatic Medical Consultation System: Frameworks, Tasks and Datasets.
News
- We have updated IMCS-21 to version 2.0 here!
- The test set of IMCS-21 is hosted on CBLUE at the TIANCHI platform. See more details at https://github.com/lemuria-wchen/imcs21-cblue. You are welcome to submit your results on CBLUE, or to compare against our results on the validation set.
- Please see more details in our Bioinformatics 2022 paper A Benchmark for Automatic Medical Consultation System: Frameworks, Tasks and Datasets.
- IMCS-21 is released, containing a total of 4,116 annotated medical consultation records covering 10 pediatric diseases.
TODO
- Update the dev set results for the DDP task
- Add detailed documentation for the DDP task
Overview of Experiments
We provide code for most of the baseline models, all based on Python 3, along with the environment setup and running procedure for each baseline.
The baselines include:
- NER task: Lattice LSTM, BERT, ERNIE, FLAT, LEBERT
- DAC task: TextCNN, TextRNN, TextRCNN, DPCNN, BERT, ERNIE
- SLI task: BERT-MLC, BERT-MTL
- MRG task: Seq2Seq, PG, Transformer, T5, ProphetNet
- DDP task: DQN, KR-DQN, REFUEL, GAMP, HRL
Note:
- The results reported on GitHub differ slightly from those reported in the paper, because we retrained the models. For a fair comparison, readers should compare either against the results in the paper or against the results reported in this document, not across the two.
- Dev set results are available only in this document.
Results of NER Task
To evaluate the NER task, we use two types of metrics: entity-level and token-level. Due to space limitations, only the token-level results are kept in the paper.
For entity-level evaluation, we report the F1 score for each entity category, as well as the overall F1 score, following the CoNLL-2003 setting.
For token-level evaluation, we report Precision, Recall and F1 score (micro).
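As a sketch of how the entity-level score can be computed from BIO tag sequences (the helper names below are illustrative, not from this repo; the entity types SX/DN/DC/EX/OP from the table would appear as tag suffixes such as `B-SX`):

```python
from typing import List, Set, Tuple

def extract_spans(tags: List[str]) -> Set[Tuple[int, int, str]]:
    """Collect (start, end, type) spans from a BIO tag sequence."""
    spans, start, etype = set(), None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
        if tag == "O" or tag.startswith("B-") or (tag.startswith("I-") and tag[2:] != etype):
            if start is not None:
                spans.add((start, i, etype))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return spans

def entity_f1(gold: List[List[str]], pred: List[List[str]]) -> float:
    """Micro-averaged entity-level F1: a span counts only on an exact match."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        gs, ps = extract_spans(g), extract_spans(p)
        tp += len(gs & ps)
        fp += len(ps - gs)
        fn += len(gs - ps)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

A partially matched span (e.g. only the first token of a two-token symptom) counts as both a false positive and a false negative, which is what makes the entity-level score stricter than the token-level one.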
The following baseline code is available:
<table> <thead> <tr> <th rowspan="2">Models</th> <th rowspan="2">Split</th> <th colspan="6">Entity-Level</th> <th colspan="3">Token-Level</th> </tr> <tr> <th>SX</th> <th>DN</th> <th>DC</th> <th>EX</th> <th>OP</th> <th>Overall</th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td rowspan="2">Lattice LSTM</td> <td>Dev</td> <td>90.61</td> <td>88.12</td> <td>90.89</td> <td>90.44</td> <td>91.14</td> <td>90.33</td> <td>89.62</td> <td>91.00</td> <td>90.31</td> </tr> <tr> <td>Test</td> <td>90.00</td> <td>87.84</td> <td>91.32</td> <td>90.55</td> <td>93.42</td> <td>90.10</td> <td>89.37</td> <td>90.84</td> <td>90.10</td> </tr> <tr> <td rowspan="2">BERT</td> <td>Dev</td> <td>91.15</td> <td>89.74</td> <td>90.97</td> <td>90.74</td> <td>92.57</td> <td>90.95</td> <td>88.99</td> <td>92.43</td> <td>90.68</td> </tr> <tr> <td>Test</td> <td>90.59</td> <td>89.97</td> <td>90.54</td> <td>90.48</td> <td>94.39</td> <td>90.64</td> <td>88.46</td> <td>92.35</td> <td>90.37</td> </tr> <tr> <td rowspan="2">ERNIE</td> <td>Dev</td> <td>91.28</td> <td>89.68</td> <td>90.92</td> <td>91.15</td> <td>92.65</td> <td>91.08</td> <td>89.36</td> <td>92.46</td> <td>90.88</td> </tr> <tr> <td>Test</td> <td>90.67</td> <td>89.89</td> <td>90.73</td> <td>90.97</td> <td>94.33</td> <td>90.78</td> <td>88.87</td> <td>92.27</td> <td>90.53</td> </tr> <tr> <td rowspan="2">FLAT</td> <td>Dev</td> <td>90.90</td> <td>89.95</td> <td>90.64</td> <td>90.58</td> <td>93.14</td> <td>90.80</td> <td>88.89</td> <td>92.23</td> <td>90.53</td> </tr> <tr> <td>Test</td> <td>90.45</td> <td>89.67</td> <td>90.35</td> <td>91.12</td> <td>93.47</td> <td>90.58</td> <td>88.76</td> <td>92.07</td> <td>90.38</td> </tr> <tr> <td rowspan="2">LEBERT</td> <td>Dev</td> <td>92.61</td> <td>90.67</td> <td>90.71</td> <td>92.39</td> <td>92.30</td> <td>92.11</td> <td>86.95</td> <td>93.05</td> <td>89.90</td> </tr> <tr> <td>Test</td> <td>92.14</td> <td>90.31</td> <td>91.16</td> <td>92.35</td> <td>93.94</td> <td>91.92</td> <td>86.53</td> 
<td>92.91</td> <td>89.60</td> </tr> </tbody> </table>

Results of DAC Task
To evaluate the DAC task, we report Precision, Recall and F1 score (macro), as well as Accuracy.
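A minimal sketch of macro-averaged metrics for a multi-class task like DAC (per-class precision/recall/F1 averaged with equal class weight, plus plain accuracy; the function name is illustrative):

```python
from collections import Counter
from typing import List, Tuple

def macro_prf_acc(gold: List[str], pred: List[str]) -> Tuple[float, float, float, float]:
    """Macro-averaged Precision/Recall/F1 plus Accuracy for multi-class labels."""
    labels = sorted(set(gold) | set(pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1  # predicted p where the gold label was g
            fn[g] += 1
    ps, rs, fs = [], [], []
    for lab in labels:
        p = tp[lab] / (tp[lab] + fp[lab]) if tp[lab] + fp[lab] else 0.0
        r = tp[lab] / (tp[lab] + fn[lab]) if tp[lab] + fn[lab] else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        ps.append(p); rs.append(r); fs.append(f)
    n = len(labels)
    acc = sum(tp.values()) / len(gold)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n, acc
```

Macro averaging gives rare dialogue-act categories the same weight as frequent ones, which is why the macro F1 in the table sits below the accuracy.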
The following baseline code is available:
<table> <thead> <tr> <th>Models</th> <th>Split</th> <th>P</th> <th>R</th> <th>F1</th> <th>Acc</th> </tr> </thead> <tbody> <tr> <td rowspan="2">TextCNN</td> <td>Dev</td> <td>73.09</td> <td>70.26</td> <td>71.26</td> <td>77.77</td> </tr> <tr> <td>Test</td> <td>74.02</td> <td>70.92</td> <td>72.22</td> <td>78.99</td> </tr> <tr> <td rowspan="2">TextRNN</td> <td>Dev</td> <td>74.02</td> <td>68.43</td> <td>70.71</td> <td>78.14</td> </tr> <tr> <td>Test</td> <td>73.07</td> <td>69.88</td> <td>70.96</td> <td>78.53</td> </tr> <tr> <td rowspan="2">TextRCNN</td> <td>Dev</td> <td>71.43</td> <td>72.68</td> <td>71.50</td> <td>77.67</td> </tr> <tr> <td>Test</td> <td>73.82</td> <td>72.53</td> <td>72.89</td> <td>79.40</td> </tr> <tr> <td rowspan="2">DPCNN</td> <td>Dev</td> <td>70.10</td> <td>70.91</td> <td>69.85</td> <td>77.14</td> </tr> <tr> <td>Test</td> <td>74.30</td> <td>69.45</td> <td>71.28</td> <td>78.75</td> </tr> <tr> <td rowspan="2">BERT</td> <td>Dev</td> <td>75.19</td> <td>76.31</td> <td>75.66</td> <td>81.00</td> </tr> <tr> <td>Test</td> <td>75.53</td> <td>77.24</td> <td>76.28</td> <td>81.65</td> </tr> <tr> <td rowspan="2">ERNIE</td> <td>Dev</td> <td>76.04</td> <td>76.82</td> <td>76.37</td> <td>81.60</td> </tr> <tr> <td>Test</td> <td>75.35</td> <td>77.16</td> <td>76.14</td> <td>81.62</td> </tr> </tbody> </table>

Error Analysis of DAC Task
The confusion matrix of the ERNIE model's predictions on the test set is shown in the figure below. Most utterance categories have few classification errors, with the notable exception of the OTHER category.
Results of SLI Task
Evaluation of the SLI-EXP and SLI-IMP tasks differs slightly, since the SLI-IMP task must additionally predict the symptom label. We therefore split evaluation into two steps: the first evaluates symptom recognition, and the second evaluates symptom (label) inference.
For symptom recognition, we consider only whether the symptom entities are identified. We use multi-label classification metrics, as surveyed in the paper A Unified View of Multi-Label Performance Measures: the example-based metrics Subset Accuracy (SA), Hamming Loss (HL) and Hamming Score (HS), and the label-based metrics Precision (P), Recall (R) and F1 score (F1) (micro).
For symptom label inference, we evaluate only the symptoms that are correctly identified, checking whether their predicted label is correct. We report the F1 score for each symptom label (Positive, Negative and Not sure), as well as the overall F1 score (macro) and the accuracy.
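The example-based and label-based recognition metrics above can be sketched as follows, treating each dialogue's symptoms as a set of label indices (the function name is illustrative; SA is the exact-set-match rate, HL the fraction of wrong label slots, HS the example-wise Jaccard overlap):

```python
from typing import List, Set, Tuple

def multilabel_metrics(gold: List[Set[int]], pred: List[Set[int]],
                       n_labels: int) -> Tuple[float, float, float, float]:
    """Return (SA, HL, HS, micro-F1) for multi-label predictions."""
    sa = hl = hs = 0.0
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        sa += float(g == p)
        hl += len(g ^ p) / n_labels            # symmetric difference = wrong slots
        hs += len(g & p) / len(g | p) if g | p else 1.0
        tp += len(g & p); fp += len(p - g); fn += len(g - p)
    n = len(gold)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return sa / n, hl / n, hs / n, f1
```

Note that SA is strict (one missed symptom fails the whole example), which explains why the SA numbers in the table are far below the micro F1 on the SLI-IMP split.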
The following baseline code is available:
NOTE: BERT-MLC is applicable to both the SLI-EXP and SLI-IMP tasks, while BERT-MTL applies only to the SLI-IMP task.
<table> <thead> <tr> <th rowspan="2">Models</th> <th rowspan="2">Split</th> <th colspan="3">Example-based</th> <th colspan="3">Label-based</th> </tr> <tr> <th>SA</th> <th>HL</th> <th>HS</th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td colspan="8">SLI-EXP (Symptom Recognition)</td> </tr> <tr> <td rowspan="2">BERT-MLC</td> <td>Dev</td> <td>75.63</td> <td>10.12</td> <td>86.53</td> <td>86.50</td> <td>93.80</td> <td>90.00</td> </tr> <tr> <td>Test</td> <td>73.24</td> <td>10.10</td> <td>84.58</td> <td>86.33</td> <td>93.14</td> <td>89.60</td> </tr> <tr> <td colspan="8">SLI-IMP (Symptom Recognition)</td> </tr> <tr> <td rowspan="2">BERT-MLC</td> <td>Dev</td> <td>33.61</td> <td>40.87</td> <td>81.34</td> <td>85.03</td> <td>95.40</td> <td>89.91</td> </tr> <tr> <td>Test</td> <td>34.16</td> <td>39.52</td> <td>82.22</td> <td>84.98</td> <td>94.81</td> <td>89.63</td> </tr> <tr> <td rowspan="2">BERT-MTL</td> <td>Dev</td> <td>36.61</td> <td>38.12</td> <td>84.33</td> <td>95.83</td> <td>86.67</td> <td>91.02</td> </tr> <tr> <td>Test</td> <td>35.88</td> <td>38.77</td> <td>83.76</td> <td>96.11</td> <td>86.18</td> <td>90.88</td> </tr> <tr> <td colspan="8">SLI-IMP (Symptom Inference)</td> </tr> <tr> <td></td> <td></td> <td>POS</td> <td>NEG</td> <td>NS</td> <td>Overall</td> <td>Acc</td> <td></td> </tr> <tr> <td rowspan="2">BERT-MLC</td> <td>Dev</td> <td>81.85</td> <td>47.99</td> <td>58.42</td> <td>62.76</td> <td>72.84</td> <td></td> </tr> <tr> <td>Test</td> <td>81.25</td> <td>46.53</td> <td>59.14</td> <td>62.31</td> <td>71.99</td> <td></td> </tr> <tr> <td rowspan="2">BERT-MTL</td> <td>Dev</td> <td>79.83</td> <td>53.38</td> <td>60.94</td> <td>64.72</td> <td>71.38</td> <td></td> </tr> <tr> <td>Test</td> <td>79.64</td> <td>53.87</td> <td>60.20</td> <td>64.57</td> <td>71.08</td> <td></td> </tr> </tbody> </table>

Results of MRG Task
For the MRG task, we use the concatenation of all non-OTHER categories of utterances to generate medical reports. During inference, the utterance categories in the test set are predicted by the ERNIE model trained on the DAC task.
To evaluate the MRG task, we report both BLEU and ROUGE scores, i.e., BLEU-2/4 and ROUGE-1/2/L. We also report the Concept F1 score (C-F1), which measures how well the model captures important medical concepts, and the Regex-based Diagnostic Accuracy (RD-Acc), which measures the model's ability to judge the disease.
The following baseline code is available:
NOTE: To calculate the C-F1 score, the trained BERT model from the NER task is used; see details in eval_ner_f1.py. To calculate RD-Acc, a simple regex-based method is used; see details in eval_acc.py.
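Conceptually, C-F1 reduces to a micro F1 between the concept sets extracted from the generated and reference reports. A sketch that takes pre-extracted concept sets as input (in the repo the extraction is done by the trained NER model; the function name here is illustrative):

```python
from typing import Iterable, List

def concept_f1(pred_concepts: List[Iterable[str]],
               ref_concepts: List[Iterable[str]]) -> float:
    """Micro F1 over medical-concept sets from generated vs. reference reports."""
    tp = fp = fn = 0
    for p, r in zip(pred_concepts, ref_concepts):
        p, r = set(p), set(r)
        tp += len(p & r)   # concepts present in both reports
        fp += len(p - r)   # hallucinated concepts
        fn += len(r - p)   # missed concepts
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

Unlike BLEU/ROUGE, this score is insensitive to paraphrasing: a report is rewarded for mentioning the right symptoms and diseases, however it phrases them.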
Results of DDP Task
To evaluate the DDP task, we report the Symptom Recall (Rec), the Diagnostic Accuracy (Acc) and the average number of interaction turns (# Turns).
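A sketch of how these three numbers can be aggregated over dialogue episodes; every field name below is illustrative, not from the repo (Rec is the fraction of gold implicit symptoms the agent asked about, Acc the fraction of correct final diagnoses):

```python
from typing import Dict, List, Tuple

def ddp_metrics(episodes: List[Dict]) -> Tuple[float, float, float]:
    """Return (symptom recall, diagnostic accuracy, average # turns).

    Assumed per-episode fields (hypothetical): 'requested' = symptoms the agent
    asked about, 'implicit' = gold implicit symptoms, 'pred_disease',
    'gold_disease', and 'turns' = number of interaction turns.
    """
    hit = sum(len(set(e["requested"]) & set(e["implicit"])) for e in episodes)
    total = sum(len(set(e["implicit"])) for e in episodes)
    rec = hit / total if total else 0.0
    acc = sum(e["pred_disease"] == e["gold_disease"] for e in episodes) / len(episodes)
    turns = sum(e["turns"] for e in episodes) / len(episodes)
    return rec, acc, turns
```

The three metrics trade off against each other: an agent can raise Rec by asking more questions, at the cost of a higher # Turns, which is why they are reported jointly.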
The following baseline code is available:
NOTE: We use open-source implementations for all baselines, since none of the original papers provide official code.
<table> <thead> <tr> <th>Models</th> <th>Rec</th> <th>Acc</th> <th># Turns</th> </tr> </thead> <tbody> <tr> <td>DQN</td> <td>0.047</td> <td>0.408</td> <td>9.750</td> </tr> <tr> <td>REFUEL</td> <td>0.262</td> <td>0.505</td> <td>5.500</td> </tr> <tr> <td>KR-DQN</td> <td>0.279</td> <td>0.485</td> <td>5.950</td> </tr> <tr> <td>GAMP</td> <td>0.067</td> <td>0.500</td> <td>1.780</td> </tr> <tr> <td>HRL</td> <td>0.295</td> <td>0.556</td> <td>6.990</td> </tr> </tbody> </table>

How to Cite
If you extend or use this work, please cite the paper where it was introduced:
@article{10.1093/bioinformatics/btac817,
author = {Chen, Wei and Li, Zhiwei and Fang, Hongyi and Yao, Qianyuan and Zhong, Cheng and Hao, Jianye and Zhang, Qi and Huang, Xuanjing and Peng, Jiajie and Wei, Zhongyu},
title = "{A Benchmark for Automatic Medical Consultation System: Frameworks, Tasks and Datasets}",
journal = {Bioinformatics},
year = {2022},
month = {12},
abstract = "{In recent years, interest has arisen in using machine learning to improve the efficiency of automatic medical consultation and enhance patient experience. In this article, we propose two frameworks to support automatic medical consultation, namely doctor-patient dialogue understanding and task-oriented interaction. We create a new large medical dialogue dataset with multi-level fine-grained annotations and establish five independent tasks, including named entity recognition, dialogue act classification, symptom label inference, medical report generation and diagnosis-oriented dialogue policy. We report a set of benchmark results for each task, which shows the usability of the dataset and sets a baseline for future studies. Both code and data is available from https://github.com/lemuria-wchen/imcs21. Supplementary data are available at Bioinformatics online.}",
issn = {1367-4803},
doi = {10.1093/bioinformatics/btac817},
url = {https://doi.org/10.1093/bioinformatics/btac817},
note = {btac817},
eprint = {https://academic.oup.com/bioinformatics/advance-article-pdf/doi/10.1093/bioinformatics/btac817/48290490/btac817.pdf},
}