# MIMIC-CXR-annotations

Here are 458 annotations used in the disease visual grounding task presented in the publication here: https://arxiv.org/abs/2007.15778. The annotations were entered by a board-certified radiologist using the COCO web annotation tool here: https://github.com/jsbroks/coco-annotator. The annotations therefore follow that tool's export format: a JSON file with three top-level fields, `images`, `categories`, and `annotations`. Information on accessing the images and reports themselves can be found here: https://physionet.org/content/mimic-cxr-jpg/2.0.0/.

<p align="center"><img src="https://github.com/leotam/MIMIC-CXR-annotations/blob/master/fig/Screen%20Shot%202020-07-26%20at%209.11.14%20PM.png"></p>
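Because the export follows the standard COCO annotator format, the file can be inspected with a few lines of Python. The sketch below is only a minimal example: the annotation file name (`annotations.json`) and the per-record keys (`id`, `name`, `category_id`) are assumptions based on the usual COCO convention, not something fixed by this repository.

```python
import json
from collections import Counter

# Load the annotation export (file name is an assumption; point this at the
# JSON file provided in this repository).
with open("annotations.json") as f:
    coco = json.load(f)

# The three top-level fields produced by the COCO annotator.
images = coco["images"]            # image metadata entries
categories = coco["categories"]    # disease categories (id, name)
annotations = coco["annotations"]  # region annotations linked to images/categories

print(f"{len(images)} images, {len(categories)} categories, {len(annotations)} annotations")

# Count annotations per disease category (assumes standard COCO keys).
cat_names = {c["id"]: c["name"] for c in categories}
counts = Counter(cat_names[a["category_id"]] for a in annotations)
for name, n in counts.most_common():
    print(f"{name}: {n}")
```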

More details are available in the publication below:

    @inproceedings{tamliterati2020,
        author    = {Tam, L. K. and Wang, X. and Turkbey, E. and Lu, K. and Wen, Y. and Xu, D.},
        title     = {Weakly supervised one-stage vision and language disease detection using large scale pneumonia and pneumothorax studies},
        booktitle = {Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2020},
        year      = {2020},
    }