A Comparative Study of Existing and New Deep Learning Methods for Detecting Knee Injuries using the MRNet Dataset

Paper presented at the Third International Workshop on Deep and Transfer Learning (DTL2020), part of the International Conference on Intelligent Data Science Technologies and Applications (IDSTA2020).

Please consider citing the following paper if you use any of this work:

@article{azcona2020comparative,
  title={A Comparative Study of Existing and New Deep Learning Methods for Detecting Knee Injuries using the MRNet Dataset},
  author={Azcona, David and McGuinness, Kevin and Smeaton, Alan F},
  journal={arXiv preprint arXiv:2010.01947},
  year={2020}
}

Abstract

This work presents a comparative study of existing and new techniques to detect knee injuries by leveraging Stanford's MRNet Dataset. All approaches are based on deep learning and we explore the comparative performances of transfer learning and a deep residual network trained from scratch. We also exploit some characteristics of Magnetic Resonance Imaging (MRI) data by, for example, using a fixed number of slices or 2D images from each of the axial, coronal and sagittal planes as well as combining the three planes into one multi-plane network. Overall we achieved a performance of 93.4% AUC on the validation data by using the more recent deep learning architectures and data augmentation strategies. More flexible architectures are also proposed that might help with the development and training of models that process MRIs. We found that transfer learning and a carefully tuned data augmentation strategy were the crucial factors in determining best performance.

Dataset

The MRNet dataset consists of knee MRI exams performed at Stanford University Medical Center. Further details can be found at https://stanfordmlgroup.github.io/competitions/mrnet/

  • 1,370 knee MRI exams performed at Stanford University Medical Center
  • 1,104 (80.6%) abnormal exams, with 319 (23.3%) ACL tears and 508 (37.1%) meniscal tears
  • Labels were obtained through manual extraction from clinical reports
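
Each exam is stored as one NumPy array per plane (axial, coronal and sagittal) with a variable number of slices. As a quick sketch of loading one exam, assuming the standard MRNet download layout (the path below is hypothetical):

import numpy as np

# hypothetical path; the MRNet download unpacks to train/<plane>/<case>.npy
exam = np.load('data/train/axial/0000.npy')
print(exam.shape)  # (number of slices, 256, 256); the slice count varies per exam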

Docker Environment

Prerequisites:

  • Docker needs to be installed on your system; see the official Docker documentation for installation instructions

Run the following commands to build and start the Docker container, then open a shell inside it:

$ cd docker
$ docker-compose -f docker-compose.yml up -d --build
$ docker exec -it mrnet_container bash

Alternatively, for convenience, use the equivalent targets in the Makefile:

$ make run
$ make dev

Exploration

  1. Start a Jupyter notebook:
$ docker exec -it mrnet_container bash
$ jupyter notebook --allow-root --ip=0.0.0.0
  2. Convert the NumPy arrays to images (a sketch of this step follows this list).
  3. Start a Python server to visualize the MRIs:

$ python -m SimpleHTTPServer 8000

(on Python 3, use python -m http.server 8000) and then navigate to http://localhost:8000/ to interact with it!
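
As a rough sketch of the conversion step (hypothetical paths and file naming, not the repository's notebook), each slice of an exam can be scaled to 8 bits and written out as a PNG:

import os
import numpy as np
from PIL import Image

exam = np.load('data/train/axial/0000.npy')   # hypothetical path, one exam
out_dir = 'images/axial/0000'
os.makedirs(out_dir, exist_ok=True)
for i, slice_ in enumerate(exam):
    # scale each slice to the 0-255 range before saving as an 8-bit PNG
    lo, hi = float(slice_.min()), float(slice_.max())
    img = ((slice_ - lo) / max(hi - lo, 1e-8) * 255).astype(np.uint8)
    Image.fromarray(img).save(os.path.join(out_dir, f'{i:03d}.png'))

The saved images can then be browsed through the local HTTP server started above.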

Deployment

In our paper we propose and evaluate the following architectures, training networks that output the probability of a patient having an ACL tear, a meniscal tear, or a general abnormality in their knee:

  1. Training a Deep Residual Network with Transfer Learning
  2. Training a Deep Residual Network from Scratch Using a Fixed Number of Slices
  3. Training a Multi-Plane Deep Residual Network
  4. Training a Multi-Plane Multi-Objective Deep Residual Network

1. Training a Deep Residual Network with Transfer Learning

  1. Select the approach by editing config.py:
APPROACH = 'pretrained'

pretrained uses ImageNet pre-trained weights.
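
As a minimal sketch of what such a transfer-learning model can look like in PyTorch (a hypothetical version with a ResNet18 backbone; the repository's actual model in train_baseline.py may differ), per-slice features from the pretrained backbone are max-pooled across slices into one exam-level logit:

import torch
import torch.nn as nn
from torchvision import models

class PretrainedMRNet(nn.Module):
    """Hypothetical single-plane, single-task transfer-learning model."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(pretrained=True)  # ImageNet weights
        # keep everything up to (and including) the global average pool
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.classifier = nn.Linear(512, 1)

    def forward(self, x):                 # x: (slices, 3, 224, 224), one exam
        f = self.features(x).flatten(1)   # (slices, 512) per-slice features
        f = f.max(dim=0).values           # max-pool across the slice axis
        return self.classifier(f)         # logit for the binary task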

  2. Train a model for each task and for each plane:
$ python src/train_baseline.py -t '<task>' -p '<plane>'

For the pretrained approach we use train_baseline.py.

For instance, for task acl:

$ python src/train_baseline.py -t 'acl' -p 'axial'
$ python src/train_baseline.py -t 'acl' -p 'coronal'
$ python src/train_baseline.py -t 'acl' -p 'sagittal'

and then repeat for tasks meniscus and abnormal.

  3. For each task, combine predictions per plane by training a Logistic Regression model:
$ python src/combine.py -t '<task>'

For instance, for task acl:

$ python src/combine.py -t 'acl'

and then repeat for tasks meniscus and abnormal.

The model with the greatest validation AUC is picked for each plane (this ensembling step is sketched at the end of this section).

  4. Generate predictions for each patient in the sample test set for all tasks: acl, meniscus and abnormal:
$ python src/predict.py
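
To illustrate the ensembling step (a standalone sketch with placeholder arrays, not the repository's combine.py), a Logistic Regression model is fitted on the per-plane probabilities:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, size=120)  # placeholder validation labels
# placeholder per-plane probabilities; in practice these come from the
# three trained single-plane models for one task
planes = [np.clip(0.6 * y_val + 0.4 * rng.random(120), 0, 1) for _ in range(3)]
X_val = np.column_stack(planes)       # shape: (exams, 3)

clf = LogisticRegression().fit(X_val, y_val)
combined = clf.predict_proba(X_val)[:, 1]
print('combined validation AUC:', roc_auc_score(y_val, combined))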

2. Training a Deep Residual Network from Scratch Using a Fixed Number of Slices

  1. Select the approach by editing config.py:
APPROACH = 'slices'

slices uses a fixed number of slices to train a network from scratch with randomly initialized weights (see the slice-resampling sketch at the end of this section).

  2. Train a model for each task and for each plane:
$ python src/train_slices.py -t '<task>' -p '<plane>'
  3. For each task, combine predictions per plane by training a Logistic Regression model:
$ python src/combine.py -t '<task>'
  4. Generate predictions for each patient in the sample test set for all tasks:
$ python src/predict.py
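
Because exams have varying slice counts, the fixed-slice approach needs every exam resampled to exactly M slices. One way to do this, in the spirit of the interpolation figure referenced below (a hypothetical sketch, not the exact training code), is linear interpolation along the slice axis:

import numpy as np

def resample_slices(exam, m):
    """Linearly interpolate an (n, h, w) exam to exactly m slices."""
    n = exam.shape[0]
    positions = np.linspace(0, n - 1, m)    # target positions along the stack
    lo = np.floor(positions).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    w = (positions - lo)[:, None, None]     # per-slice interpolation weights
    return (1 - w) * exam[lo] + w * exam[hi]

exam = np.random.rand(23, 256, 256)         # placeholder exam with 23 slices
print(resample_slices(exam, 16).shape)      # (16, 256, 256)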

3. Training a Multi-Plane Deep Residual Network

  1. Select the approach by editing config.py:
APPROACH = 'slices'
  2. Train a model for each task, with all three planes together:
$ python src/train_slices_planes.py -t '<task>'
  3. Generate predictions for each patient in the sample test set for all tasks:
$ python src/predict_planes.py
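
A hypothetical minimal fusion head for the multi-plane setup (an assumption about the general shape, not the repository's exact architecture) concatenates the exam-level feature vectors of the three planes before a single task classifier:

import torch
import torch.nn as nn

class MultiPlaneHead(nn.Module):
    """Hypothetical fusion of per-plane features for one task."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.classifier = nn.Linear(3 * feat_dim, 1)

    def forward(self, f_axial, f_coronal, f_sagittal):
        # each input is an exam-level feature vector from one plane's backbone
        fused = torch.cat([f_axial, f_coronal, f_sagittal], dim=-1)
        return self.classifier(fused)

head = MultiPlaneHead()
logit = head(torch.randn(512), torch.randn(512), torch.randn(512))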

4. Training a Multi-Plane Multi-Objective Deep Residual Network

  1. Select the approach by editing config.py:
APPROACH = 'slices'
  2. Train a single model for all tasks and all planes together:
$ python src/train_slices_planes_tasks.py -t '<task>'
  3. Generate predictions for each patient in the sample test set for all tasks:
$ python src/predict_planes.py
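
The multi-objective variant predicts all three tasks at once. As a hypothetical sketch of such a head (the output ordering and loss here are assumptions), one shared feature vector feeds three binary outputs trained jointly with binary cross-entropy:

import torch
import torch.nn as nn

# three binary logits per exam: abnormal, acl, meniscus (assumed ordering)
head = nn.Linear(512, 3)
criterion = nn.BCEWithLogitsLoss()

features = torch.randn(8, 512)                # placeholder exam features
labels = torch.randint(0, 2, (8, 3)).float()  # one binary label per task
loss = criterion(head(features), labels)
print(loss.item())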

Submission

Submit to CodaLab by zipping the src folder and uploading it to my worksheet:

$ sh codalab.sh

Figures & Tables

  • Data Augmentation Policy. All values correspond to the configuration from [1].
  • Results for the proposed architectures. Combined is the accuracy of a logistic-regression-based ensemble.
  • Percentages of images augmented for each task and plane. All values correspond to the configuration from [1].
  • Interpolation Examples: transforming N images into M images. The matrices show the weights applied to each original image to interpolate to the newly transformed images.

Further work

Additional notebooks in this repository demonstrate how to augment the MR images.

Resources

  • CodaLab Submission
  • Previous Submissions
  • Tutorials
  • Discussions
  • Visualization
  • Context
  • Models
  • Pre-trained models
  • Data Augmentation
  • Class Activation Maps
