A PyTorch Implementation: Multimodal Recurrent Model with Attention for Automated Radiology Report Generation

This repository reimplements the recurrent Conv model from the 2018 MICCAI paper "Multimodal Recurrent Model with Attention for Automated Radiology Report Generation" [1].

The source code is licensed under the CC BY 4.0 license; the remaining contents of this repository are released under the MIT license.

Dependencies

The required Python packages are listed in requirements.txt.
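
To install them (assuming pip and a working Python environment are already set up), run:

$ pip install -r requirements.txt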

Data Download

Download the Indiana University Chest X-Ray dataset [2] from the original source; the dataset is distributed under its own license, so please follow those terms.

I selected all frontal images that have both an impression and findings: Download link (646 MB).

After downloading, please unzip it into the "IUdata" folder.
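
For example, assuming the downloaded archive is named IUdata.zip (the actual filename may differ, and the target directory may need adjusting depending on how the archive is structured):

$ unzip IUdata.zip -d IUdata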

Train

First, generate the .json and .pkl data in the "IUdata" folder (these files are already provided in this repository).

Second, start training.
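
The exact training command depends on the script name in this repository; assuming a training script analogous to tester.py, it would look something like:

$ python trainer.py   # hypothetical script name; check the repository for the actual entry point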

Test

Before testing, keep only one set of weights in the "model_weights" folder, e.g., 1-finding_decoder-9.ckpt, 1-image_encoder-9.ckpt, and 1-impression_decoder-9.ckpt. Only three .ckpt files are allowed in the "model_weights" folder.
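
For example, a correctly prepared folder contains exactly the three checkpoint files listed above:

$ ls model_weights
1-finding_decoder-9.ckpt  1-image_encoder-9.ckpt  1-impression_decoder-9.ckpt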

Run $ python tester.py

Results

Quantitative Results

Model                              | BLEU_1 | BLEU_2 | BLEU_3 | BLEU_4 | METEOR | ROUGE
Original paper, Recurrent-Conv [1] | 0.416  | 0.298  | 0.217  | 0.163  | 0.227  | 0.309
Ours, Recurrent-Conv               | 0.444  | 0.315  | 0.224  | 0.162  | 0.189  | 0.364

Qualitative Results


Citation

If you use the code in this repository, please cite this GitHub repository.

References

[1] Xue, Y., Xu, T., Long, L.R., Xue, Z., Antani, S., Thoma, G.R., Huang, X.: Multimodal recurrent model with attention for automated radiology report generation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 457–466. Springer (2018)

[2] Demner-Fushman, D., Kohli, M.D., Rosenman, M.B., Shooshan, S.E., Rodriguez, L., Antani, S., Thoma, G.R., McDonald, C.J.: Preparing a collection of radiology examinations for distribution and retrieval. J. Am. Med. Inform. Assoc. 23(2), 304–310 (2015)