Welcome to the Wiki of the Multi-Modality Assistant Prototype, our first attempt at a proof-of-concept for AI-assistive techniques in breast cancer diagnosis within our MIDA Project. The project's goal is to propose a new methodology for fully automated breast cancer detection and segmentation from multi-modal medical images, incorporating clinical covariates. The project provides the following novelties. First, it relies on Deep Convolutional Neural Networks (CNNs) that incorporate several image modalities: Magnetic Resonance Imaging (MRI), UltraSound (US) and MammoGraphy (MG) views. This is the first methodology able to classify a whole exam containing all of the above image modalities. Second, our system visually provides clinicians not only with the classification results and accuracy of the automated methods, but also with the explainability and efficiency of the achieved results. This last topic belongs to the growing research field of eXplainable Artificial Intelligence (XAI), notably addressed by DARPA.
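
To make the multi-modality idea concrete, below is a minimal sketch of a late-fusion exam classifier in PyTorch: one small CNN encoder per modality (MRI, US, MG), with the resulting features concatenated together with clinical covariates before a shared classification head. All names here (`ModalityEncoder`, `MultiModalExamClassifier`, `feat_dim`, `n_covariates`) are illustrative assumptions for this sketch, not the project's actual code.

```python
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Small CNN mapping one image modality to a fixed-size feature vector."""

    def __init__(self, in_channels: int, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 64, 1, 1)
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))


class MultiModalExamClassifier(nn.Module):
    """Fuses MRI, US and MG features plus clinical covariates for one exam."""

    def __init__(self, n_covariates: int, n_classes: int = 2, feat_dim: int = 128):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "mri": ModalityEncoder(1, feat_dim),
            "us": ModalityEncoder(1, feat_dim),
            "mg": ModalityEncoder(1, feat_dim),
        })
        # Late fusion: concatenated modality features + covariates -> logits.
        self.head = nn.Linear(3 * feat_dim + n_covariates, n_classes)

    def forward(self, images: dict, covariates: torch.Tensor) -> torch.Tensor:
        feats = [self.encoders[m](images[m]) for m in ("mri", "us", "mg")]
        return self.head(torch.cat(feats + [covariates], dim=1))


# Toy exam: one grayscale image per modality plus 4 clinical covariates.
model = MultiModalExamClassifier(n_covariates=4)
images = {m: torch.randn(1, 1, 64, 64) for m in ("mri", "us", "mg")}
logits = model(images, torch.randn(1, 4))
```

Late fusion is only one design choice; per-view encoders could also share weights, or features could be merged earlier in the network.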

Index
