Deep Learning Interactive Visualization

This project contains all code to train a convolutional neural network (CNN) model to detect Alzheimer's disease and to visualize the brain regions that contributed most to the decision (relevance maps).

Further details on the procedures, including the samples, image processing, neural network modeling, evaluation, and validation, were published in:

Dyrba et al. (2021) Improving 3D convolutional neural network comprehensibility via interactive visualization of relevance maps: evaluation in Alzheimer's disease. Alzheimer's Research & Therapy 13. DOI: 10.1186/s13195-021-00924-2.

Screenshot of the InteractiveVis app


Running the interactive visualization

The interactive Bokeh web application InteractiveVis can be used to derive and inspect the relevance maps overlaid on the original input images.

There are three options for running it:

  1. We set up a public web service to quickly try it out: https://explaination.net/demo

  2. Alternatively, download the Docker container from Docker Hub: sudo docker pull martindyrba/interactivevis. Then use the scripts sudo ./run_docker_intvis.sh and sudo ./stop_docker_intvis.sh to start or stop the Bokeh app (you find both files in this repository; a rough sketch of the underlying Docker commands is shown after this list). After starting the container, the app will be available in your web browser at http://localhost:5006/InteractiveVis

  3. Download this Git repository. Install the required Python modules (see below). Then point the Anaconda prompt or terminal console to the DeepLearningInteractiveVis main directory and run the Bokeh app using: bokeh serve InteractiveVis --show
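
For reference, the helper scripts from option 2 essentially wrap standard Docker commands. The following is only a minimal sketch, not the exact content of the scripts; it assumes the container exposes Bokeh's default port 5006, and the container name is purely illustrative:

# rough equivalent of run_docker_intvis.sh (sketch, container name is illustrative)
sudo docker run -d --name interactivevis -p 5006:5006 martindyrba/interactivevis
# rough equivalent of stop_docker_intvis.sh (sketch)
sudo docker stop interactivevis && sudo docker rm interactivevis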

Requirements and installation:

To run the interactive visualization from the Git sources, you will need Python < 3.8, because tensorflow==1.15 cannot be installed on newer Python versions. We also recommend first creating a dedicated Python environment (using Anaconda or virtualenv/venv) to avoid interfering with the Python modules/versions of your other coding projects or of other users on a shared system.

# for Anaconda:
conda create -n InteractiveVis python=3.7
conda activate InteractiveVis

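If you prefer virtualenv/venv over Anaconda, an equivalent setup might look as follows. This is only a sketch and assumes a Python 3.7 interpreter is already installed and available as python3.7:

# for virtualenv/venv (sketch):
python3.7 -m venv InteractiveVis-env
source InteractiveVis-env/bin/activate   # on Windows: InteractiveVis-env\Scripts\activate
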
Run pip to install the dependencies:

pip install -r requirements.txt

Then you can start the Bokeh application:

bokeh serve InteractiveVis --show
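
By default, Bokeh serves the app on port 5006 and only accepts connections from localhost. If the app should be reachable from other machines, Bokeh's standard command-line options can be used, for example (the hostname below is just a placeholder):

bokeh serve InteractiveVis --port 5006 --allow-websocket-origin=myserver.example.org:5006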

CNN model training and performance evaluation

The code for training the CNN models and for the performance evaluation is provided in the subdirectory scripts of this repository. The order of script execution was as follows:


InteractiveVis architecture overview

InteractiveVis UML class diagram (v4)

Select subject UML sequence diagram (v3)


License:

Copyright (c) 2020 Martin Dyrba martin.dyrba@dzne.de, German Center for Neurodegenerative Diseases (DZNE), Rostock, Germany

This project and the included source code are published under the MIT license. See LICENSE for details.