Ravnoor Gill edited this page Aug 24, 2017 · 24 revisions

Deep Brainhack 2017 Projects

Segmentation Projects

PET Brain Mask segmentation

Build and train a convolutional neural net to automatically segment brain tissue from PET images. For more info, check out:

EEG Seizure Detection and Prediction

Using EEG recordings from patients with epilepsy, detect whether a seizure is currently occurring. The data was obtained from 22 patients over several hours, across 23 separate electrodes. A label of 0 indicates interictal periods (non-seizure), and a label of 1 indicates a seizure is occurring at that time. Possible targets are listed below.

  • Detect whether a seizure is occurring in a given input (binary target).
  • Detect when a seizure is happening in a given input.
  • Predict whether a seizure will happen within 60 minutes of the current sample. (note: labels will need some minor processing).
  • Predict seizures in patients whose data has not been seen (leave-one-patient-out cross-validation).

The data will be available on-site and on ElementAI's machines, and is about 38GB.
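For the "seizure within 60 minutes" target, the per-sample seizure labels need to be remapped to preictal labels. A minimal sketch of that relabeling, using a hypothetical `preictal_labels` helper (the horizon is expressed in samples, i.e. 60 minutes times the recording's sampling rate, which is an assumption here):

```python
import numpy as np

def preictal_labels(seizure_labels, horizon_samples):
    """Relabel samples: 1 for any sample within `horizon_samples`
    before a seizure, 0 elsewhere (seizure samples themselves excluded)."""
    labels = np.asarray(seizure_labels)
    out = np.zeros_like(labels)
    for i in np.flatnonzero(labels == 1):
        out[max(0, i - horizon_samples):i] = 1
    out[labels == 1] = 0  # the seizure itself is not "preictal"
    return out

# Toy trace: seizure at samples 6-7, prediction horizon of 3 samples.
y = [0, 0, 0, 0, 0, 0, 1, 1, 0, 0]
print(preictal_labels(y, 3).tolist())  # → [0, 0, 0, 1, 1, 1, 0, 0, 0, 0]
```

The same relabeled targets can then feed any of the classification setups listed above.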

HCP Brain Mask Segmentation

Using the Human Connectome Project structural T1 and T2 scans, learn to produce full-resolution brain masks. A label of 0 indicates 'not brain' and a label of 1 indicates 'brain'. Possible targets are listed below.

  • Given co-registered T1/T2 scans, predict the brain mask.
  • Given either a T1 or T2 scan, predict the brain mask.

The data will be available on-site and on ElementAI's machines. Use of the data requires you to sign up for HCP. The size of the data is 74GB for T1, 74GB for T2, and 300MB for the brain masks.
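Predicted masks are typically scored against the reference masks with the Dice overlap coefficient. A minimal sketch (the function name and toy 2D arrays are illustrative; real masks would be full-resolution 3D volumes):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1 = brain, 0 = not brain)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2D example with 1 overlapping voxel out of 2 + 1 predicted/true.
p = np.array([[1, 1], [0, 0]])
t = np.array([[1, 0], [0, 0]])
print(round(dice(p, t), 3))  # → 0.667
```

Dice ranges from 0 (no overlap) to 1 (identical masks), and its soft variant is also commonly used directly as a training loss for segmentation networks.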

Healthy Brain Tissue Segmentation

Train an automatic segmentation model for healthy brain tissues in T1 images. Data generously provided by Neuromorphometrics. This data was used for the MICCAI 2012 Grand Challenge on Multi-Atlas Labeling.

Automatic Quality Control of ABIDE images

The Stanford Center for Reproducible Neuroscience has been working on quality control of MRI images using an automatic pipeline that computes 64 image quality metrics and uses them to train an automatic classifier, but has not been able to generalize it to new sites with different MRI parameters. Read their pre-print here:

The code for MRIQC is here: They would like us to try to learn their QC labels and see whether deep learning can generalize better than their random forests/SVMs trained on the imaging metrics.

Here's a writeup by Carolina Makowski about one possible quality control protocol for manual labeling:
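The cross-site generalization question above is usually tested with a leave-one-site-out split: train on all sites but one, evaluate on the held-out site. A minimal sketch with a hypothetical `leave_one_site_out` helper and a majority-class baseline standing in for the real classifier:

```python
import numpy as np

def leave_one_site_out(sites):
    """Yield (site, train_idx, test_idx), holding out one site at a time."""
    sites = np.asarray(sites)
    for site in np.unique(sites):
        yield site, np.flatnonzero(sites != site), np.flatnonzero(sites == site)

# Toy example: 6 scans from 3 sites with pass(1)/fail(0) QC labels.
sites = ["A", "A", "B", "B", "C", "C"]
labels = np.array([1, 1, 0, 1, 0, 0])
for site, train, test in leave_one_site_out(sites):
    # Majority-class baseline fit on the training sites only.
    majority = int(labels[train].mean() >= 0.5)
    acc = (labels[test] == majority).mean()
    print(site, majority, acc)
```

The same split applies to the EEG project's leave-one-patient-out evaluation, with patients in place of sites.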

Oddball detection on fMRI and EEG data

Imputation of missing modalities

Using a voxel-based 3D GAN, train generator and discriminator convolutional neural networks to generate synthetic FLAIR from T1 (or T1 from FLAIR), enabling the use of complete datasets in instances of missing modalities.
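As a minimal sketch of the adversarial setup only (not a working implementation: a real version would use a deep-learning framework with 3D convolutions and gradient-based updates), the hypothetical `generator` and `discriminator` below are single linear layers acting on flattened toy patches:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: tiny 4x4x4 voxel patches, flattened to vectors.
n_vox = 4 * 4 * 4
G = rng.normal(scale=0.1, size=(n_vox, n_vox))  # generator weights (T1 -> FLAIR)
D = rng.normal(scale=0.1, size=(n_vox, 1))      # discriminator weights

def generator(t1):
    return np.tanh(t1 @ G)                  # synthetic FLAIR patch

def discriminator(flair):
    return 1 / (1 + np.exp(-(flair @ D)))   # estimated P(patch is real)

t1_patch = rng.normal(size=(1, n_vox))
real_flair = rng.normal(size=(1, n_vox))
fake_flair = generator(t1_patch)

# Standard GAN losses (scalars); a framework would backprop through these.
d_loss = -np.log(discriminator(real_flair)) - np.log(1 - discriminator(fake_flair))
g_loss = -np.log(discriminator(fake_flair))
print(fake_flair.shape, d_loss.item(), g_loss.item())
```

Training alternates minimizing `d_loss` over the discriminator and `g_loss` over the generator, typically with an added voxel-wise reconstruction term so the synthetic modality stays anatomically faithful.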

Time-lapse tracing of axon arbours from in vivo 2-photon images

Train a neural network to reconstruct dynamically growing axonal arbours from 3D 2-photon microscopy of individually labeled neurons in living animals, using a database of manual reconstructions as ground truth. The primary objective will be to track changes in individual branches over time.

Automated detection of brain lesions causing epilepsy

The ultimate clinical problem is the detection of brain lesions causing epilepsy on MRI scans, particularly in patients whose scans appear visually normal. Multimodal MRI data on healthy controls and on patients with visible, manually delineated lesions will be available on ElementAI machines. The aim is to detect abnormalities in a given patient's images by comparison with a database of controls.
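One simple baseline for "patient vs. control database" comparison is a voxel-wise z-score map: how many control standard deviations each patient voxel deviates from the control mean. A minimal sketch (the function name, array sizes, and simulated lesion are illustrative; real images would first need co-registration and intensity normalization):

```python
import numpy as np

def zscore_map(patient, controls, eps=1e-7):
    """Voxel-wise z-scores of a patient image against a control database.

    patient:  3D array (one modality, co-registered with the controls)
    controls: 4D array of shape (n_controls, *patient.shape)
    """
    mu = controls.mean(axis=0)
    sigma = controls.std(axis=0)
    return (patient - mu) / (sigma + eps)

rng = np.random.default_rng(0)
controls = rng.normal(size=(20, 8, 8, 8))
patient = rng.normal(size=(8, 8, 8))
patient[2, 2, 2] += 20.0                 # simulated focal abnormality
z = zscore_map(patient, controls)
print(np.unravel_index(np.abs(z).argmax(), z.shape))
```

Voxels with large |z| flag candidate abnormalities; a learned model would aim to beat this baseline, especially on the MRI-negative cases.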
