MIDA Multi-Modality Assistant Prototype
<img src="https://github.com/mida-project/prototype-multi-modality-assistant/blob/master/assets/banner.png?raw=true"/>

With this repository, researchers can explore the potential of Artificial Intelligence (AI) assistance seamlessly integrated into the medical imaging workflow. This solution also enables researchers to easily integrate their own AI algorithms and connect those algorithms with clinicians. Our AI assistant provides clinicians with a proposed classification for the breast cancer diagnosis of each patient.
The assistant's messages can be configured in the src/common/messages/ folder, so that the communication can be adapted to each case. As a conceptual use of our AI assistant, we used a DenseNet model for patient classification; we also provide the used ai-classifier-densenet161 repository. Results of the AI algorithms can be inserted in the src/common/outputs/ folder, so that the assistant can ask clinicians whether the AI model was right (or not) when classifying the patient (i.e., providing the BI-RADS result).
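For illustration only, a per-case message file could look like the following sketch; the file name and field names here are hypothetical, so consult the actual files shipped in src/common/messages/ for the real schema:

# Hypothetical example only -- the field names do not reflect the real schema.
cat > src/common/messages/example-case.json <<'EOF'
{
  "caseId": "example-0001",
  "suggestion": "The AI model proposes BI-RADS 4 for this patient.",
  "question": "Do you agree with the proposed classification?"
}
EOF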
The assistant provides three main functionalities: (1) Accept, if the clinician agrees with the AI model's result; (2) Reject, where the clinician rejects the result coming from the AI model and can provide a new BI-RADS so that the model can be re-trained; and (3) Heatmaps, where the assistant provides information concerning the heat intensity and the extent of the delineation levels. Both the Accept and Reject functionalities trigger an update of the files in the src/common/outputs/ folder. For the Heatmaps, we used the prototype-heatmap repository to show clinicians the important regions that the model considered abnormalities.
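Similarly, a sketch of how an output record might capture the model's proposal and the clinician's feedback is shown below; again, the file name and field names are hypothetical, and the real schema is defined by the files in src/common/outputs/:

# Hypothetical example only -- mirror the structure of the real output files.
cat > src/common/outputs/example-case.json <<'EOF'
{
  "caseId": "example-0001",
  "modelBirads": 4,
  "clinicianAccepted": false,
  "clinicianBirads": 3
}
EOF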
The prototype was developed with CornerstoneJS technologies, a complete web-based platform for medical imaging. Furthermore, this prototype and repository were first developed and evaluated for the User Tests and Analysis 7 (UTA7) study of our research work. Several datasets support this UTA7 evaluation, so it is important to also address them here.
For the UTA7 study and tasks, the dataset-uta7-dicom repository contains the sample of medical images used in this study. Next, a group of radiologists annotated these medical images; the resulting dataset can be found in the dataset-uta7-annotations repository. Finally, we computed and generated the heatmaps, which can be found in the dataset-uta7-heatmaps repository. For comparison, we also used both the prototype-multi-modality and prototype-heatmap repositories. Hence, the prototype-multi-modality-assistant repository should be paired with these two repositories, as we did in our research work. A working demo of the UTA7 study can be found here, but first follow the CORS instructions further below.
MIMBCD-UI is a research work that deals with a recently proposed technique in the literature: Deep Convolutional Neural Networks (CNNs). These deep networks incorporate information from several different modalities and are integrated into a User Interface (UI), so that clinicians can interact with them. The UI was implemented based on our Prototype Breast Screening repository. The hereby repository is a mirror of that Prototype Breast Screening repository, an Open Source solution whose goal is to deliver an example of a web-based medical imaging platform for breast cancer diagnosis. We also have several demos on our MIMBCD-UI YouTube Channel and BreastScreening YouTube Channel; please follow us. For a proper video demonstration of this repository during the UTA7 study, please follow this YouTube Playlist as an example.
Citing
We kindly ask scientific works and studies that make use of this repository to cite it in their associated publications. Similarly, we ask open-source and closed-source works that make use of this repository to notify us of this use.
You can cite our work using the following BibTeX entry:
@article{CALISTO2021102607,
title = {Introduction of Human-Centric AI Assistant to Aid Radiologists for Multimodal Breast Image Classification},
journal = {International Journal of Human-Computer Studies},
pages = {102607},
year = {2021},
issn = {1071-5819},
doi = {10.1016/j.ijhcs.2021.102607},
url = {https://www.sciencedirect.com/science/article/pii/S1071581921000252},
author = {Francisco Maria Calisto and Carlos Santiago and Nuno Nunes and Jacinto C. Nascimento},
keywords = {Human-Computer Interaction, Artificial Intelligence, Healthcare, Medical Imaging, Breast Cancer},
abstract = {In this research, we take an HCI perspective on the opportunities provided by AI techniques in medical imaging, focusing on workflow efficiency and quality, preventing errors and variability of diagnosis in Breast Cancer. Starting from a holistic understanding of the clinical context, we developed BreastScreening to support Multimodality and integrate AI techniques (using a deep neural network to support automatic and reliable classification) in the medical diagnosis workflow. This was assessed by using a significant number of clinical settings and radiologists. Here we present: i) user study findings of 45 physicians comprising nine clinical institutions; ii) list of design recommendations for visualization to support breast screening radiomics; iii) evaluation results of a proof-of-concept BreastScreening prototype for two conditions Current (without AI assistant) and AI-Assisted; and iv) evidence from the impact of a Multimodality and AI-Assisted strategy in diagnosing and severity classification of lesions. The above strategies will allow us to conclude about the behaviour of clinicians when an AI module is present in a diagnostic system. This behaviour will have a direct impact in the clinicians workflow that is thoroughly addressed herein. Our results show a high level of acceptance of AI techniques from radiologists and point to a significant reduction of cognitive workload and improvement in diagnosis execution.}
}
Table of contents
- Prerequisites
- Usage
- Roadmap
- Contributing
- Information
- License
- Team
- Acknowledgments
- Supporting
Prerequisites
The following list shows the required dependencies for running this project locally:
- Git or any other GitHub-compatible version control tool
- NodeJS (v10.15.3 or newer)
- npm (6.14.4 or newer)
If needed, here are some tutorials and documentation to help you feel more comfortable using and exploring this repository:
Usage
Follow the instructions here to set up the current repository and extract the present data. To understand what this repository is used for, read the following steps.
Instructions
First of all, you will need NodeJS installed locally on your machine. This project needs both the npm and http-server dependencies to install and run the core project. If you do not have these installed, please follow the INSTALL instructions.
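For example, assuming a standard npm setup, http-server can be installed globally as follows:

npm install -g http-server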
DICOM Server
The following assumes you will be using Git for version control of this repository, hosted on GitHub. First, download and install a Git distribution. Our system needs to be integrated with WADO-URI servers, DICOMweb servers, or any HTTP-based server that returns DICOM P10 instances. We suggest using an Orthanc server, since it is a simple yet powerful standalone DICOM server that provides a RESTful API.
Download
1.1.1. Download the DICOM server by following the next instruction: you can download the latest version, or you can use our own sample of an Orthanc version with our examples of patient images. The instructions to use our solution are as follows.
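If you prefer containers, one common way to get a recent Orthanc running locally is through its official Docker image (this assumes Docker is installed; 4242 and 8042 are Orthanc's default DICOM and HTTP ports):

# Runs Orthanc with its default DICOM (4242) and HTTP (8042) ports exposed.
docker run -p 4242:4242 -p 8042:8042 --rm jodogne/orthanc-plugins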
1.1.2. Follow the Orthanc Documentation to properly configure your server;
1.1.3. Memorize the configured <port> number of Orthanc, which will be important for the Configurations section;
1.1.4. You will need to populate the Orthanc server with your own medical images, or you can use our sample from the dataset-uta7-dicom repository;
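As a quick sketch, a single DICOM file can be uploaded to Orthanc through its RESTful API (this assumes Orthanc's default HTTP port 8042; replace sample.dcm with one of your own files):

# Upload one DICOM P10 file to the running Orthanc instance.
curl -X POST http://localhost:8042/instances --data-binary @sample.dcm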
Main Server
Our main server uses NodeJS and has several dependencies. For the following steps, you must already have both NodeJS and npm installed on your machine.
Clone
2.1.1. Clone the project repository:
git clone git@github.com:mida-project/prototype-multi-modality-assistant.git
2.1.2. Now, you will need to populate both the src/common/messages/ and src/common/outputs/ folders with your own dataset of classifications, or you can clone our example:
git clone git@github.com:MIMBCD-UI/dataset-uta7-ai.git
2.1.3. Go inside the project folder:
cd prototype-multi-modality-assistant/
2.1.4. Next, run our script by doing:
./scripts/filler.sh
Configurations
2.2.1. Go inside the config/ folder:
cd config/
2.2.2. Copy the sample version of the env file to the new one:
cp sample-env.json env.json
2.2.3. Copy the sample version of the local file to the new one:
cp sample-local.json local.json
2.2.4. Change the <port> number in this new local.json file to the one configured for Orthanc in the DICOM Server section (step 1.1.3).
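As an illustration, if Orthanc was configured on HTTP port 8042, the relevant entry in local.json could end up looking something like the sketch below; the key names are hypothetical, so mirror the structure you find in sample-local.json:

# Hypothetical structure only -- the real keys are in sample-local.json.
cat local.json
{
  "dicomServer": {
    "port": 8042
  }
}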
Install
2.3.1. Install the local dependencies:
npm install
2.3.2. You can now run the project; just follow the next section.
Run
2.4.1. Inside the project folder:
cd prototype-multi-modality-assistant/
2.4.2. If you have already run the DICOM Server in a previous section, please jump to point 2.4.3.; otherwise do:
./Orthanc
2.4.3. Run the code:
npm run build:multi
2.4.4. Start the project:
npm run start:multi
2.4.5. Open the link:
localhost:8286/src/public/index.html
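For example, on macOS you can open it directly from the terminal (on Linux, use xdg-open instead, or simply paste the URL into your browser):

open http://localhost:8286/src/public/index.html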
Access-Control-Allow-Origin
Access-Control-Allow-Origin is a CORS (Cross-Origin Resource Sharing) header. To learn how the Access-Control-Allow-Origin header works, follow the link.
Google Chrome
- To deal with the CORS issue, it is necessary to open Google Chrome (on macOS) with the --disable-web-security flag on:
open /Applications/Google\ Chrome.app --args --disable-web-security --user-data-dir
- Or install the CORS plugin for Google Chrome.
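Alternatively, if you serve the project with http-server during development, its --cors flag makes the server send the Access-Control-Allow-Origin header itself (a development convenience only; the port below is illustrative):

# Development convenience; the port number is illustrative.
http-server --cors -p 8286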
Roadmap
We need to follow the repository's goal by addressing the information presented here. It is therefore of chief importance to scale the solution supported by this repository. The repository follows best practices, achieving the Core Infrastructure Initiative (CII) specifications.
Besides that, one of our goals involves creating a configuration file to automatically test and publish our code to npm or any other package manager, most likely prepared for GitHub Actions. Other goals may be written here in the future.
Contributing
This project exists thanks to all the people who contribute. We welcome everyone who wants to help us improve this repository. Below, we present some suggestions.
Issuer
Whether something seems to be missing or you need support, just open a new issue. Regardless of whether it is a simple request or a fully-structured feature, we will do our best to understand it and, eventually, address it.
Developer
We like to develop, but we also like collaboration. You could ask us to add some features... or you could do it yourself and fork this repository. Maybe you even want to do a side-project of your own. In the latter cases, please let us share some insights about what we currently have.
Information
This section summarizes important items of this repository, addressing all the fundamental resources that were crucial to it.
Related Repositories
The following list presents the set of repositories related to the present one:
Dataset Resources
To publish our datasets we used a well-known platform called Kaggle. To access our project's Profile Page, just follow the link. Last but not least, you can also follow our work on the data.world, figshare.com and openml.org platforms.
About
For more information about the MIMBCD-UI research work, just follow the link. Details of this repository are also available in a wiki. This prototype was developed using several libraries and dependencies. While all libraries had their importance and supported the development, one was of chief importance: the CornerstoneJS library, which, together with secondary libraries, supports this prototype. We acknowledge all people involved along the path.
License
Copyright © 2018 Instituto Superior Técnico (IST)
The prototype-multi-modality-assistant repository is distributed under the terms of both an Academic License and a Commercial License, for academic and commercial purposes, respectively. For more information regarding the license of the hereby repository, just follow both the ACADEMIC and COMMERCIAL files.
Intellectual Property
The content of the present repository is protected as a World Intellectual Property Organization (WIPO) patented invention. Specifically, the invention of the prototype-multi-modality-assistant repository is under the protection of patent number WO2022071818A1, with application number PCT/PT2021/050029. The title of the invention is "Computational Method and System for Improved Identification of Breast Lesions", registered under the WO patent office.
Team
Our team brings everything together, sharing ideas and the same purpose, to develop even better work. In this section, we list the people important to this repository, along with their respective links.
Authors
- Francisco Maria Calisto [ Website | ResearchGate | GitHub | Twitter | LinkedIn ]
- Carlos Santiago [ ResearchGate ]
- Nuno Nunes [ ResearchGate ]
- Jacinto Nascimento [ ResearchGate ]
Reviewers
- Hugo Lencastre [ ResearchGate ]
- Nádia Mourão [ ResearchGate ]
Companions
- Alfredo Ferreira
- Bruno Cardoso
- Bruno Dias
- Bruno Oliveira
- Catarina Barata
- Daniel Gonçalves
- João Bernardo Tavares
- Luís Ribeiro Gomes
- Madalena Pedreira
- Pedro Miraldo
Acknowledgments
This work was partially supported by national funds through FCT and IST under the UID/EEA/50009/2013 project and the BL89/2017-IST-ID grant. We thank Dr. Clara Aleluia and her radiology team at HFF for valuable insights and for using the Assistant on a daily basis. From IPO-Lisboa, we would like to thank the medical imaging teams of Dr. José Carlos Marques and Dr. José Venâncio. From IPO-Coimbra, we would like to thank the radiology department director and the whole team of Dr. Idílio Gomes. Also, we would like to provide our acknowledgments to Dr. Emília Vieira and Dr. Cátia Pedro from Hospital Santa Maria. Furthermore, we want to thank the whole team from the radiology department of HB for their participation. Last but not least, a great thanks to Dr. Cristina Ribeiro da Fonseca, who among others is giving us crucial information for the BreastScreening project.
A special thanks to Chris Hafey, the propelling person of CornerstoneJS, who also developed the cornerstoneDemo. Not forgetting the three supporters of the CornerstoneJS library: Aloïs Dreyfus, Danny Brown and Erik Ziegler. We would also like to give a special thanks to Erik Ziegler, who supported us through several issues along this path.
List of important people to acknowledge:
Supporting
Our organization is a non-profit organization. However, we have many needs across our activity: from infrastructure to services, we need some time, contributions, and help to support our team and projects.
<span> <a href="https://opencollective.com/oppr" target="_blank"> <img src="https://opencollective.com/oppr/tiers/backer.svg" width="220"> </a> </span>Contributors
This project exists thanks to all the people who contribute. [Contribute].
<span class="image"> <a href="graphs/contributors"> <img src="https://opencollective.com/oppr/contributors.svg?width=890" /> </a> </span>Backers
Thank you to all our backers! 🙏 [Become a backer]
<span> <a href="https://opencollective.com/oppr#backers" target="_blank"> <img src="https://opencollective.com/oppr/backers.svg?width=890"> </a> </span>Sponsors
Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]
<span> <a href="https://opencollective.com/oppr/sponsor/0/website" target="_blank"> <img src="https://opencollective.com/oppr/sponsor/0/avatar.svg"> </a> </span> <br /> <span> <a href="http://www.fct.pt/" title="FCT" target="_blank"> <img src="https://github.com/mida-project/meta/blob/master/brands/fct_footer.png?raw=true" alt="fct" width="20%" /> </a> </span> <span> <a href="https://www.fccn.pt/en/" title="FCCN" target="_blank"> <img src="https://github.com/mida-project/meta/blob/master/brands/fccn_footer.png?raw=true" alt="fccn" width="20%" /> </a> </span> <span> <a href="https://www.ulisboa.pt/en/" title="ULisboa" target="_blank"> <img src="https://github.com/mida-project/meta/blob/master/brands/ulisboa_footer.png?raw=true" alt="ulisboa" width="20%" /> </a> </span> <span> <a href="http://tecnico.ulisboa.pt/" title="IST" target="_blank"> <img src="https://github.com/mida-project/meta/blob/master/brands/ist_footer.png?raw=true" alt="ist" width="20%" /> </a> </span> <span> <a href="http://hff.min-saude.pt/" title="HFF" target="_blank"> <img src="https://github.com/mida-project/meta/blob/master/brands/hff_footer.png?raw=true" alt="hff" width="20%" /> </a> </span>