
Publications

Francisco Maria Calisto edited this page Mar 31, 2022 · 37 revisions

We are committed to communicating our progress to the scientific community. Whenever we achieve a potential contribution, we disseminate our findings through a peer-review process. Our list of publications is maintained on ResearchGate and mirrored on this platform. Our work typically appears in leading journals and conferences of the relevant fields of research, spanning Human-Computer Interaction, Health Informatics, and Artificial Intelligence, as well as Cancer Research and Medical Research. Note that some of the information presented here extends to both MIDA and BreastScreening publications.

Related Publications

The following items list Related Publications from our research work. All works are co-authored by members of our team, and we therefore want to spread the word about them among the scientific community.

Scientific Papers


Francisco Maria Calisto, Carlos Santiago, Nuno Nunes, and Jacinto C. Nascimento. 2022. BreastScreening-AI: Evaluating Medical Intelligent Agents for Human-AI Interactions. Artificial Intelligence in Medicine, Volume 127, 102285, ISSN 0933-3657. DOI: https://doi.org/10.1016/j.artmed.2022.102285 (https://www.sciencedirect.com/science/article/pii/S0933365722000501)


Francisco Maria Calisto, Carlos Santiago, Nuno Nunes, and Jacinto C. Nascimento. 2021. Introduction of Human-Centric AI Assistant to Aid Radiologists for Multimodal Breast Image Classification. International Journal of Human-Computer Studies, Volume 150, 102607, ISSN 1071-5819. DOI: https://doi.org/10.1016/j.ijhcs.2021.102607 (https://www.sciencedirect.com/science/article/pii/S1071581921000252)


Francisco Maria Calisto, Nuno Nunes, and Jacinto C. Nascimento. 2020. BreastScreening: On the Use of Multi-Modality in Medical Imaging Diagnosis. In Proceedings of the International Conference on Advanced Visual Interfaces (AVI '20). Association for Computing Machinery, New York, NY, USA, Article 49, 1–5. DOI: 10.1145/3399715.3399744


Francisco M. Calisto, Alfredo Ferreira, Jacinto C. Nascimento, and Daniel Gonçalves. 2017. Towards Touch-Based Medical Image Diagnosis Annotation. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces (ISS '17). ACM, New York, NY, USA, 390–395. DOI: 10.1145/3132272.3134111


Gustavo Carneiro, Jacinto Nascimento, and Andrew P. Bradley. 2017. Automated Analysis of Unregistered Multi-View Mammograms With Deep Learning. IEEE Transactions on Medical Imaging, Volume 36, Issue 11 (November 2017), 2355–2365. Published 12 September 2017. DOI: 10.1109/TMI.2017.2751523


Gabriel Maicas, Gustavo Carneiro, Andrew P. Bradley, Jacinto C. Nascimento, and Ian Reid. 2017. Deep Reinforcement Learning for Active Breast Lesion Detection from DCE-MRI. In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2017), 665–673. DOI: 10.1007/978-3-319-66179-7_76


M. Jorge Cardoso, Tal Arbel, Gustavo Carneiro, Tanveer Syeda-Mahmood, João Manuel R.S. Tavares, Mehdi Moradi, Andrew Bradley, Hayit Greenspan, João Paulo Papa, Anant Madabhushi, Jacinto C. Nascimento, Jaime S. Cardoso, Vasileios Belagiannis, and Zhi Lu (Eds.). 2017. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer, September 7, 2017. ISBN: 9783319675589


Gustavo Carneiro, Jacinto Nascimento, and Andrew P. Bradley. 2017. Chapter 14 – Deep Learning Models for Classifying Mammogram Exams Containing Unregistered Multi-View Images and Segmentation Maps of Lesions. In Deep Learning for Medical Image Analysis, 321–339. DOI: 10.1016/B978-0-12-810408-8.00019-5


Master Theses

As our work is rooted in the academic community, we invite some of our students to work with us. For that purpose, we propose several Master Thesis (MSc) topics so that students can join this project. The following list shows some of the developed work.


Master Project: 2D Breast Cancer Diagnosis Explainable Visualizations

Master Project Report to obtain a Master Degree in Information Systems and Computer Engineering by the Department of Computer Science and Engineering at Instituto Superior Técnico - University of Lisbon

Nádia Mourão

DOI: 10.13140/RG.2.2.31605.93928/3

GitHub: https://github.com/MIMBCD-UI/master-xai-vis


Master Project: Breast Cancer Multimodality Scalable Interactions

Master Project Report to obtain a Master Degree in Information Systems and Computer Engineering by the Department of Computer Science and Engineering at Instituto Superior Técnico - University of Lisbon

Hugo Lencastre

DOI: 10.13140/RG.2.2.35800.24329/4

GitHub: https://github.com/MIMBCD-UI/ScalableInteractions


Master Dissertation: Medical Imaging Multimodality Breast Cancer Diagnosis User Interface

Master Dissertation to obtain a Master Degree in Information Systems and Computer Engineering by the Department of Computer Science and Engineering at Instituto Superior Técnico - University of Lisbon

Francisco Maria Calisto

DOI: 10.13140/RG.2.2.15187.02084

GitHub: https://github.com/MIMBCD-UI/master-dissertation


Readings

Working in scientific research is exciting and challenging. New techniques and tools are constantly emerging and, honestly, it can feel overwhelming. Many of these developments are first revealed in academic research articles. In this section, we address several important Readings of our work.

High-Quality Important Human-Computer Interaction Papers

In our work, we follow a set of important Human-Computer Interaction (HCI) literature. While we aim to develop several Medical Imaging (MI) systems and tools, these will be used by real-world users: our collaborating clinicians. Because of that, we need to apply several HCI methods to achieve a more robust UI solution. Below, we describe each high-quality research item that we found important for our work as HCI literature.

The first important work we follow is that of Sultanum et al. [1], which gave us first insights into both the HCI and Health Informatics (HI) fields. The authors investigate the role and use of text in clinical practice, and report on efforts to assess the best of both worlds, text and visualization, to facilitate the clinical overview. Their results led to a number of grounded design recommendations that guide visualization design for supporting clinical text overview.

Another important work is the one by Kocielnik et al. [2], where the authors show that a different focus on the types of errors to avoid (False Positives vs. False Negatives) can lead to vastly different subjective perceptions of accuracy and acceptance. Furthermore, the authors designed an expectation-adjustment technique that prepares users for AI imperfections and results in a significant increase in acceptance.
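To make the False-Positive vs. False-Negative trade-off concrete, the following sketch (a hypothetical illustration, not code from [2]; all scores and labels are invented) shows how moving a classifier's decision threshold shifts the balance between the two error types:

```python
# Illustrative sketch: shifting a decision threshold trades
# False Positives (FP) against False Negatives (FN).
# Scores and labels below are hypothetical.

def count_errors(scores, labels, threshold):
    """Return (false_positives, false_negatives) at a given threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Hypothetical model scores (probability of the positive class) and ground truth.
scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

# A low threshold avoids False Negatives at the cost of False Positives;
# a high threshold does the opposite.
for t in (0.25, 0.50, 0.75):
    fp, fn = count_errors(scores, labels, t)
    print(f"threshold={t:.2f}  FP={fp}  FN={fn}")
```

Which end of this trade-off a design emphasises is exactly the framing choice that, per [2], shapes users' subjective perception of the same underlying accuracy.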

In the same mindset (and with much of the same team) as the previous work [2], Amershi et al. [3] developed a set of guidelines that guide us during our work. Their results verify the relevance of the guidelines over a spectrum of interaction scenarios and reveal gaps, highlighting opportunities for our research. Based on these evaluations, the guidelines serve as a resource for us, as practitioners, when designing clinical applications and features that harness AI, and for our further interest in the development of Human-AI Interaction (HAII) designs.

With a similar purpose, Cai et al. [4] identified the needs of clinicians when searching for similar images retrieved by a Deep Learning (DL) algorithm, and developed tools that empower users to cope with the search algorithm on-the-fly, communicating which types of similarity are most important at different moments in time. In two evaluations with clinicians, the authors found that these tools increased the diagnostic utility of the images found and increased user trust in the algorithm. They also observed that users adopted new strategies when using the refinement tools, re-purposing them to test and understand the underlying algorithm and to disambiguate Machine Learning (ML) errors from their own. Taken together, their findings inform future HAII collaborative systems for expert decision-making. Similarly, another work [7] investigated the key types of information medical experts desire when first introduced to a diagnostic AI assistant. In a qualitative lab study, the authors interviewed several clinicians before, during, and after presenting them with a Deep Neural Network (DNN) prediction for prostate cancer diagnosis, to learn what they wanted to know about the AI assistant. Another similar work [12] proposes and evaluates two kinds of example-based explanations in the visual domain, normative explanations and comparative explanations, which automatically surface examples from the training set of a DNN sketch-recognition algorithm.
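The core idea behind such on-the-fly refinement tools can be sketched as a similarity search whose notion of "similar" the user re-weights interactively. The sketch below is a minimal, hypothetical illustration (the feature names, values, and function names are our own, not the actual tools from [4]):

```python
# Illustrative sketch of refine-on-the-fly similarity search:
# the user re-weights which feature dimensions count as "similar",
# and the ranking updates immediately. All data is hypothetical.

def weighted_distance(a, b, weights):
    """Weighted squared Euclidean distance between two feature vectors."""
    return sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights))

def rank_by_similarity(query, gallery, weights):
    """Return gallery item ids, most similar (smallest distance) first."""
    return sorted(gallery, key=lambda item: weighted_distance(query, gallery[item], weights))

# Hypothetical per-image features: (shape, texture, color).
gallery = {
    "img_a": (0.9, 0.1, 0.5),
    "img_b": (0.2, 0.8, 0.5),
    "img_c": (0.8, 0.7, 0.1),
}
query = (1.0, 1.0, 0.5)

# Emphasising shape vs. texture yields different "most similar" images.
print(rank_by_similarity(query, gallery, (1.0, 0.1, 0.1)))  # shape matters most
print(rank_by_similarity(query, gallery, (0.1, 1.0, 0.1)))  # texture matters most
```

Changing the weights changes the ranking, which is the mechanism that lets a clinician tell the system what kind of similarity matters right now.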

Wang et al. [5] propose a conceptual framework for building Human-Centered, decision-theory-driven XAI, based on an extensive review across these fields. Drawing on their framework, the authors identified pathways along which human cognitive patterns drive needs for building XAI, and how XAI can mitigate common cognitive biases. Finally, the authors discuss implications for XAI design and development.

Regarding the decision-making process, Yang et al. [6] developed a new form of clinical Decision Support Tool (DST). Their tool automatically generates slides for clinicians' decision meetings with subtly embedded machine prognostics. The design took inspiration from the notion of Unremarkable Computing: by augmenting the users' routines, technology/AI can be significantly important to users yet remain unobtrusive. Correspondingly, in another paper, Yang et al. [13] describe a field study investigating how clinicians make heart pump implant decisions, focusing on how best to integrate an intelligent DST into their work process.

Yin et al. [8] address a relatively under-explored aspect of HCI: people's ability to understand the relationship between an ML model's stated performance on held-out data and its expected performance post-deployment. The authors conducted a large-scale, randomized human-subject experiment to examine how laypeople's trust in a model, measured both by how often they relied on it and by their self-reported levels of trust, varies with the model's stated accuracy on held-out data as well as with its observed accuracy in practice.
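The distinction at the heart of this study, stated held-out accuracy versus observed post-deployment accuracy, can be illustrated with a small simulation. The sketch below is hypothetical (the numbers and function are invented for illustration, not taken from [8]):

```python
# Illustrative sketch: a model's stated (held-out) accuracy can differ
# from its observed accuracy after deployment, e.g. under distribution
# shift. All numbers here are hypothetical.
import random

def observed_accuracy(true_accuracy, n_trials, rng):
    """Simulate n_trials predictions from a model that is correct with
    probability true_accuracy, and return the observed correct rate."""
    correct = sum(1 for _ in range(n_trials) if rng.random() < true_accuracy)
    return correct / n_trials

rng = random.Random(0)  # fixed seed for reproducibility
stated = 0.90           # accuracy reported on held-out data
deployed = 0.75         # hypothetical post-deployment accuracy (shifted data)

print(f"stated held-out accuracy : {stated:.2f}")
print(f"observed after deployment: {observed_accuracy(deployed, 1000, rng):.2f}")
```

The gap between the two numbers is exactly what the experiment in [8] asks laypeople to reason about when calibrating their trust.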

Other authors [9] introduce an interactive vascular network reconstruction system called INVANER, which relies on a graph-like representation of the network's structure. The system uses graph-related tools with local effects and introduces two novel tools dedicated to solving two common problems that arise when automatically extracting the centerlines of vascular structures: so-called "Kissing Vessels" and a phenomenon called "Dotted Vessels".

Three novel image techniques were presented by Forlines et al. [10], who designed a system to improve visual search. Their techniques rely on breaking images into segments, which are then recombined or displayed in novel ways. The techniques and their underlying design reasoning are described in detail, and three experiments provide initial evidence that these techniques lead to better search performance in a simulated cell-slide pathology task. In related work by different authors, Ruddle et al. [14] describe the design and evaluation of two generations of an interface for navigating the gigapixel image datasets that pathologists use to diagnose cancer.

Last but not least, Ocegueda-Hernández et al. [11] present advances towards computational methods for the natural and intuitive visualization of volumetric medical data [15]. The authors proposed several methods enabling any user to explore volumetric medical data without any training or instruction. This natural, intuitive capability could be an advantage for clinicians who want to explore medical data but are not familiar with the complicated software and systems currently on the market.

References

[1] Nicole Sultanum, Michael Brudno, Daniel Wigdor, and Fanny Chevalier. 2018. More Text Please! Understanding and Supporting the Use of Visualization for Clinical Text Overview. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). Association for Computing Machinery, New York, NY, USA, Paper 422, 1–13. DOI: https://doi.org/10.1145/3173574.3173996

[2] Rafal Kocielnik, Saleema Amershi, and Paul N. Bennett. 2019. Will You Accept an Imperfect AI? Exploring Designs for Adjusting End-user Expectations of AI Systems. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, Paper 411, 1–14. DOI: https://doi.org/10.1145/3290605.3300641

[3] Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz. 2019. Guidelines for Human-AI Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, Paper 3, 1–13. DOI: https://doi.org/10.1145/3290605.3300233

[4] Carrie J. Cai, Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, Fernanda Viegas, Greg S. Corrado, Martin C. Stumpe, and Michael Terry. 2019. Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, Paper 4, 1–14. DOI: https://doi.org/10.1145/3290605.3300234

[5] Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y. Lim. 2019. Designing Theory-Driven User-Centric Explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, Paper 601, 1–15. DOI: https://doi.org/10.1145/3290605.3300831

[6] Qian Yang, Aaron Steinfeld, and John Zimmerman. 2019. Unremarkable AI: Fitting Intelligent Decision Support into Critical, Clinical Decision-Making Processes. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, Paper 238, 1–11. DOI: https://doi.org/10.1145/3290605.3300468

[7] Carrie J. Cai, Samantha Winter, David Steiner, Lauren Wilcox, and Michael Terry. 2019. “Hello AI”: Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 104 (November 2019), 24 pages. DOI: https://doi.org/10.1145/3359206

[8] Ming Yin, Jennifer Wortman Vaughan, and Hanna Wallach. 2019. Understanding the Effect of Accuracy on Trust in Machine Learning Models. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Association for Computing Machinery, New York, NY, USA, Paper 279, 1–12. DOI: https://doi.org/10.1145/3290605.3300509

[9] Valentin Z. Nigolian, Takeo Igarashi, and Hirofumi Seo. 2019. INVANER: INteractive VAscular Network Editing and Repair. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST ’19). Association for Computing Machinery, New York, NY, USA, 1197–1209. DOI: https://doi.org/10.1145/3332165.3347900

[10] Clifton Forlines and Ravin Balakrishnan. 2009. Improving visual search with image segmentation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’09). Association for Computing Machinery, New York, NY, USA, 1093–1102. DOI: https://doi.org/10.1145/1518701.1518868

[11] Vladimir Ocegueda-Hernández and Gerardo Mendizabal-Ruiz. 2016. Computational Methods for the Natural and Intuitive Visualization of Volumetric Medical Data. In Companion Publication of the 21st International Conference on Intelligent User Interfaces (IUI ’16 Companion). Association for Computing Machinery, New York, NY, USA, 54–57. DOI: https://doi.org/10.1145/2876456.2879485

[12] Carrie J. Cai, Jonas Jongejan, and Jess Holbrook. 2019. The effects of example-based explanations in a machine learning interface. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19). Association for Computing Machinery, New York, NY, USA, 258–262. DOI: https://doi.org/10.1145/3301275.3302289

[13] Qian Yang, John Zimmerman, Aaron Steinfeld, Lisa Carey, and James F. Antaki. 2016. Investigating the Heart Pump Implant Decision Process: Opportunities for Decision Support Tools to Help. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). Association for Computing Machinery, New York, NY, USA, 4477–4488. DOI: https://doi.org/10.1145/2858036.2858373

[14] Roy A. Ruddle, Rhys G. Thomas, Rebecca Randell, Philip Quirke, and Darren Treanor. 2016. The Design and Evaluation of Interfaces for Navigating Gigapixel Images in Digital Pathology. ACM Trans. Comput.-Hum. Interact. 23, 1, Article 5 (January 2016), 29 pages. DOI: https://doi.org/10.1145/2834117

[15] Martins, D.S.C., 2016. Safe storage of medical images in NoSQL databases (Doctoral dissertation).
