Data and Notebooks to reproduce the results of the paper by Graziani and Palatnik et al. (2021)

Sharp-LIME: Sharpening Local Interpretable Model-agnostic Explanations for Histopathology

Applying off-the-shelf methods with default configurations, such as Local Interpretable Model-Agnostic Explanations (LIME) [1], is not sufficient to generate stable and understandable explanations in histopathology [3]. This work improves standard LIME by leveraging nuclei annotations, creating a reliable way for pathologists to audit black-box tumor classifiers. The resulting visualizations reveal the sharp, focused attention of the deep classifier on the neoplastic nuclei in the dataset, an observation in line with clinical decision making. Compared to standard LIME, our explanations are more understandable to domain experts, show higher stability, and pass sanity checks: they are consistent under changes of data or initialization and sensitive to changes of the network parameters.
Explore the docs »

View Examples · Report Bug

Table of Contents

  1. About The Paper
  2. Getting Started
  3. Usage
  4. License
  5. Contact
  6. Acknowledgements

About The Paper

We propose a methodology to improve the reliability and explainability of LIME for histopathology. Our main observation is that the unsupervised segmentation method used in standard LIME is not optimal for identifying superpixels in pathology images.
We improve this approach by selecting regions of the image that carry semantic meaning, namely nuclei or portions of the background. This is achieved by exploiting the manual nuclei contours in PanNuke breast images [4] and by using a Mask R-CNN to segment unlabelled nuclei in Camelyon [5]. To balance the foreground-to-background ratio, we divide the background tissue into nine small blocks and compare the LIME weights of these blocks against those of the nuclei.
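The core idea above can be sketched as building a semantic segmentation map in which every annotated nucleus is its own LIME superpixel and the remaining background is split into nine coarse tiles. This is a minimal NumPy sketch, not the repository's actual code; the function name `semantic_segments` and the instance-labelled mask format are assumptions for illustration. A map built this way can be passed to LIME via the `segmentation_fn` argument of `LimeImageExplainer.explain_instance`.

```python
import numpy as np

def semantic_segments(nuclei_mask, n_blocks=3):
    """Build a LIME segmentation map from a nuclei instance mask.

    nuclei_mask: 2D int array, 0 = background, k > 0 = nucleus instance k.
    Each nucleus keeps its own label; background pixels are assigned to
    one of n_blocks x n_blocks coarse tiles (nine by default).
    """
    h, w = nuclei_mask.shape
    segments = nuclei_mask.astype(int).copy()
    n_nuclei = int(segments.max())

    # Tile index of every row/column (0 .. n_blocks-1).
    rows = np.minimum(np.arange(h) * n_blocks // h, n_blocks - 1)
    cols = np.minimum(np.arange(w) * n_blocks // w, n_blocks - 1)
    block_id = rows[:, None] * n_blocks + cols[None, :]  # 0 .. n_blocks^2 - 1

    # Background pixels get labels after the last nucleus label.
    bg = segments == 0
    segments[bg] = n_nuclei + 1 + block_id[bg]
    return segments
```

With two nuclei and nine background tiles, the resulting map contains eleven superpixels in total, so LIME perturbs nuclei and background regions as separate, semantically meaningful units.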

Based Upon

  • 1 Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. ""Why should I trust you?": Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016.
  • 2 Palatnik de Sousa, Iam, Marley Maria Bernardes Rebuzzi Vellasco, and Eduardo Costa da Silva. "Local interpretable model-agnostic explanations for classification of lymph node metastases." Sensors 19.13 (2019): 2969.
  • 3 Graziani, Mara, et al. "Evaluation and Comparison of CNN Visual Explanations for Histopathology." (2020).
  • 4 Gamper, Jevgenij, et al. "Pannuke: an open pan-cancer histology dataset for nuclei instance segmentation and classification." European Congress on Digital Pathology. Springer, Cham, 2019.
  • 5 Litjens, Geert, et al. "1399 H&E-stained sentinel lymph node sections of breast cancer patients: the CAMELYON dataset." GigaScience 7.6 (2018): giy065.

Getting Started

To get a local copy up and running follow these simple steps.

Prerequisites

This code was developed in Python 3.6 using TensorFlow 2. You will also need some standard packages to replicate the experiments. Follow the instructions in Installation to set up the environment.

Installation

  1. Clone the repo
    git clone https://github.com/maragraziani/MICCAI2021_replicate
  2. Install python packages with pip
    pip install numpy pandas matplotlib h5py seaborn scikit-image 
    pip install git+https://github.com/palatos/lime@ColorExperiments

Usage

The notebooks in this repository reproduce the results of the paper and show how to generate Sharp-LIME explanations on the PanNuke and Camelyon images.

For more examples, please refer to the Notebooks folder
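Once an explanation has been computed on the semantic superpixels, the paper compares the LIME weights of the nine background blocks against those of the nuclei. The sketch below illustrates that comparison step under an assumed format: a `{segment_label: weight}` mapping with nuclei labelled `1..n_nuclei` and background blocks after them. The function name `rank_segments` and this mapping format are hypothetical, not the repository's API.

```python
import numpy as np

def rank_segments(weights, n_nuclei):
    """Split per-segment LIME weights into nuclei and background blocks.

    weights: dict {segment_label: weight} from the local linear model,
             with labels 1..n_nuclei for nuclei and higher labels for
             the background tiles.
    Returns the nuclei ranked by attribution (highest first) and the
    mean weight over the background blocks, for a direct comparison.
    """
    nuclei = {k: v for k, v in weights.items() if 1 <= k <= n_nuclei}
    background = [v for k, v in weights.items() if k > n_nuclei]
    ranked = sorted(nuclei.items(), key=lambda kv: kv[1], reverse=True)
    return ranked, float(np.mean(background))
```

A high weight on a nucleus together with near-zero background means indicates that the classifier's decision is driven by the nuclei, the behaviour reported for the neoplastic images in the paper.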

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Mara Graziani - @mormontre - mara.graziani@hevs.ch
Iam Palatnik - iam.palat@gmail.com

Cite our work

If you make use of the code, please cite our paper in your work:

@article{graziani2021sharpening,
  title   = "Sharpening Local Interpretable Model-agnostic Explanations for Histopathology: Improved Understandability and Reliability",
  journal = "to be presented at MICCAI2021",
  year    = "2021",
  author  = "Mara Graziani and Iam Palatnik De Sousa and Marley M.B.R. Vellasco and Eduardo Costa da Silva and Henning Mueller and Vincent Andrearczyk"
}
