85 changes: 51 additions & 34 deletions README.md
@@ -1,57 +1,74 @@
# pyLattice_deepLearning

Patch-trained 3D U-Nets for binary segmentation, in three clear Jupyter Notebooks.

![](images/u-net_architecture.png)

pyLattice_deepLearning was created to segment puncta in 3D microscopy data, but our Jupyter Notebooks will walk you through training a 3D U-Net on data of your choice. Because our microscopy data was highly imbalanced (most voxels carry no signal), we developed code that splits each image into cube patches (x, y, z), discards patches with little or no signal, and trains on the remaining patches. For segmentation, the code again splits the image into cube patches, segments each patch, and stitches the patches back together for the final output. A trained U-Net can therefore segment images of arbitrarily large dimensions (x, y, z), provided they have the same resolution as the training data.
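
As a rough illustration of the patching idea, here is a minimal sketch (the function name and signal threshold are hypothetical, not the repository's exact filtering code):

```python
import numpy as np
from skimage.util.shape import view_as_blocks

def signal_patches(volume, patch_size=96, min_mean=0.01):
    # Crop each axis to a multiple of patch_size so the cubes tile exactly.
    cropped = volume[tuple(slice(0, (s // patch_size) * patch_size)
                           for s in volume.shape)]
    # Tile into non-overlapping cubes and flatten the block grid.
    blocks = view_as_blocks(cropped, (patch_size,) * 3)
    patches = blocks.reshape(-1, patch_size, patch_size, patch_size)
    # Discard near-empty patches so training isn't dominated by background.
    return patches[patches.mean(axis=(1, 2, 3)) > min_mean]
```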

## Getting Started

### Prerequisites
* Anaconda (Python 3.x)

### Installing

Steps to set up the environment on your computer:

1. Clone this repository
```
$ git clone https://github.com/JohSchoeneberg/pyLattice_deepLearning
```

2. Set up a new Conda environment (Windows users need to do this in Anaconda Prompt)
```
$ cd pyLattice_deepLearning
$ conda create -n pyLattice_3D_env
$ conda activate pyLattice_3D_env
$ conda install pip=19.2
$ pip install -r requirements.txt
$ jupyter-notebook
```

## Usage

Run our quickstart notebooks! Look for the comments in each notebook to guide you as you train the model.

![](images/raw_mask_prediction.png)

A: Raw Data, B: Ground Truth, C: Prediction

### Preprocessing ([quickstart-1GenData.ipynb](https://github.com/JohSchoeneberg/pyLattice_deepLearning/blob/master/src/quickstart-1GenData.ipynb))

This notebook generates and saves cube patches from the training data you provide.

Prior to running this notebook, create 2 folders under ```pyLattice_deepLearning/src/```:
1. ```pyLattice_deepLearning/src/quickstart-data/```
2. ```pyLattice_deepLearning/src/quickstart-gendata/```

Currently our code supports grayscale images. If you're looking to use RGB images, you'll need to edit how the NumPy arrays are handled in [generator.py](https://github.com/JohSchoeneberg/pyLattice_deepLearning/blob/master/src/generator.py), [predict.py](https://github.com/JohSchoeneberg/pyLattice_deepLearning/blob/master/src/predict.py), and [visualize.py](https://github.com/JohSchoeneberg/pyLattice_deepLearning/blob/master/src/visualize.py), in addition to the 3 Jupyter Notebooks.
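
If you do work with RGB volumes, the conversion itself is simple; a minimal sketch, assuming a channels-last layout of shape (z, y, x, 3):

```python
import numpy as np

def to_grayscale(volume_rgb):
    # Standard luminance weights; any fixed channel weighting works here.
    weights = np.array([0.2125, 0.7154, 0.0721])
    return (volume_rgb * weights).sum(axis=-1)
```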

### Training ([quickstart-2Unet3D.ipynb](https://github.com/JohSchoeneberg/pyLattice_deepLearning/blob/master/src/quickstart-2Unet3D.ipynb))

This notebook trains a 3D U-Net.
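
For orientation, here is a minimal sketch of a 3D U-Net in Keras 2.2.5 with the TensorFlow 1.15 backend; the depth and filter counts are illustrative, not the notebook's exact architecture:

```python
from keras.models import Model
from keras.layers import Input, Conv3D, MaxPooling3D, UpSampling3D, concatenate

def tiny_unet3d(patch_size=96):
    inputs = Input((patch_size, patch_size, patch_size, 1))  # one grayscale channel

    # Encoder: convolve, then halve each spatial dimension.
    c1 = Conv3D(8, 3, activation='relu', padding='same')(inputs)
    p1 = MaxPooling3D(pool_size=2)(c1)

    # Bottleneck.
    c2 = Conv3D(16, 3, activation='relu', padding='same')(p1)

    # Decoder: upsample and fuse with the matching encoder features.
    u1 = UpSampling3D(size=2)(c2)
    c3 = Conv3D(8, 3, activation='relu', padding='same')(concatenate([u1, c1]))

    # Sigmoid head gives a per-voxel foreground probability.
    outputs = Conv3D(1, 1, activation='sigmoid')(c3)

    model = Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
```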

### Predicting ([quickstart-3Load_Model.ipynb](https://github.com/JohSchoeneberg/pyLattice_deepLearning/blob/master/src/quickstart-3Load_Model.ipynb))

This notebook loads a 3D U-Net from the weights and exports the prediction.
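
A minimal sketch of that load-predict-stitch step (the weights filename is hypothetical; use the file your training run produced, and crop the volume to multiples of 96 first):

```python
import numpy as np
from keras.models import load_model
from skimage.util.shape import view_as_blocks

model = load_model("unet3d_model.h5")  # hypothetical filename
patch_size = 96

def predict_volume(volume):
    # Tile the (pre-cropped) volume into cubes and add a channel axis.
    blocks = view_as_blocks(volume, (patch_size,) * 3)
    grid = blocks.shape[:3]
    patches = blocks.reshape(-1, patch_size, patch_size, patch_size, 1)
    preds = model.predict(patches.astype('float32'), batch_size=1)
    # Reassemble the patch grid into the original volume shape.
    preds = preds.reshape(grid + (patch_size,) * 3)
    full = preds.transpose(0, 3, 1, 4, 2, 5).reshape(
        grid[0] * patch_size, grid[1] * patch_size, grid[2] * patch_size)
    return full > 0.5  # binary mask
```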

## Authors

* **Joh Schoeneberg** - *Postdoc* - [Website](https://www.schoeneberglab.org)
* **Gautham Raghupathi** - *High School Intern* - [LinkedIn](https://www.linkedin.com/in/gurugautham/)

## References
If you use our code, please consider citing:
```
@inproceedings{schöneberg_raghupathi,
author={Schöneberg, Johannes and Raghupathi, Gautham},
title={3D Deep Convolutional Neural Networks in Lattice Light-Sheet Data Puncta Segmentation},
booktitle={2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)},
year={2019},
pages={2369--2372}
}
```

## License
[BSD-3-Clause License](https://github.com/JohSchoeneberg/pyLattice_deepLearning/blob/master/LICENSE)
Binary file added images/raw_mask_prediction.PNG
Binary file added images/u-net_architecture.PNG
5 changes: 5 additions & 0 deletions requirements.txt
@@ -0,0 +1,5 @@
jupyter
numpy==1.18
scikit-image==0.16.2
tensorflow==1.15.2
keras==2.2.5
171 changes: 171 additions & 0 deletions src/1GenLargeData.ipynb
@@ -0,0 +1,171 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import skimage\n",
"from skimage.util.shape import view_as_blocks\n",
"import os\n",
"import shutil\n",
"import json"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"image_path = \"/home/gauthamar11/pyLattice2/src/tensorflow/quickUnet/dataset/extras/S3P5_488_150mw_560_300mw_Objdz150nm_ch1_CAM1_stack0001_560nm_0005689msec_0090121790msecAbs_000x_000y_003z_0000t_decon.tif\"\n",
"mask_path= \"/home/gauthamar11/pyLattice2/src/tensorflow/quickUnet/dataset/extras/dmask_02.tif\"\n",
"split_directory=\"/home/gauthamar11/pyLattice2/src/tensorflow/quickUnet/genData/\"\n",
"patch_size = 96\n",
"train_split = 1 #Trying to get coverage of whole large dataset frame. Can change once we use more frames of our large data\n",
"\n",
"if \"train\" not in os.listdir(split_directory):\n",
" os.mkdir(split_directory+\"train/\")\n",
"if \"test\" not in os.listdir(split_directory):\n",
" os.mkdir(split_directory+\"test/\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Image cropped to: 192, 672, 576\n",
"(192, 672, 576)\n",
"(192, 672, 576)\n",
"255\n"
]
}
],
"source": [
"latticeMovieImage = skimage.external.tifffile.imread(image_path)\n",
"latticeMovieMask = skimage.external.tifffile.imread(mask_path)\n",
"offset=np.asarray([0,0,0])\n",
"\n",
"x_extra = latticeMovieImage.shape[0]%patch_size\n",
"x_size = latticeMovieImage.shape[0] - x_extra\n",
"if offset[0] > x_extra:\n",
" print(\"1st dim offset exceeds image dim\")\n",
" offset[0] = 0\n",
" \n",
"y_extra = latticeMovieImage.shape[1]%patch_size\n",
"y_size = latticeMovieImage.shape[1] - y_extra\n",
"if offset[1] > y_extra:\n",
" print(\"2st dim offset exceeds image dim\")\n",
" offset[1] = 0\n",
" \n",
"z_extra = latticeMovieImage.shape[2]%patch_size\n",
"z_size = latticeMovieImage.shape[2] - z_extra\n",
"if offset[2] > z_extra:\n",
" print(\"3rd dim offset exceeds image dim\")\n",
" offset[2] = 0\n",
" \n",
"latticeMovieImage = latticeMovieImage[offset[0]:x_size+offset[0], offset[1]:y_size+offset[1], offset[2]:z_size+offset[2]]\n",
"latticeMovieMask = latticeMovieMask[offset[0]:x_size+offset[0], offset[1]:y_size+offset[1], offset[2]:z_size+offset[2]]\n",
"print(\"Image cropped to: \" + str(x_size) + \", \" + str(y_size) + \", \" + str(z_size))\n",
"\n",
"print(latticeMovieImage.shape)\n",
"print(latticeMovieMask.shape)\n",
"print(np.amax(latticeMovieMask))"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.5/dist-packages/skimage/util/shape.py:94: RuntimeWarning: Cannot provide views on a non-contiguous input array without copying.\n",
" warn(RuntimeWarning(\"Cannot provide views on a non-contiguous input \"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"(84, 96, 96, 96)\n",
"(84, 96, 96, 96)\n"
]
}
],
"source": [
"lattice_patches = view_as_blocks(latticeMovieImage, block_shape=(patch_size, patch_size, patch_size))\n",
"lattice_patches = lattice_patches.reshape(int(x_size/patch_size)*int(y_size/patch_size)*int(z_size/patch_size), patch_size, patch_size, patch_size)\n",
"\n",
"\n",
"mask_patches = view_as_blocks(latticeMovieMask, block_shape=(patch_size, patch_size, patch_size))\n",
"mask_patches = mask_patches.reshape(int(x_size/patch_size)*int(y_size/patch_size)*int(z_size/patch_size), patch_size, patch_size, patch_size)\n",
"\n",
"\n",
"print(lattice_patches.shape)\n",
"print(mask_patches.shape)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"num_patches = lattice_patches.shape[0]\n",
"\n",
"for k in range(0, num_patches):\n",
" x_file = lattice_patches[k].astype('uint16')\n",
" y_file = mask_patches[k].astype('uint16')\n",
" \n",
" metadata_x = dict(microscope='joh', shape=x_file.shape, dtype=x_file.dtype.str)\n",
" metadata_x = json.dumps(metadata_x)\n",
" \n",
" metadata_y = dict(microscope='joh', shape=y_file.shape, dtype=y_file.dtype.str)\n",
" metadata_y = json.dumps(metadata_y)\n",
" \n",
" os.mkdir(split_directory+\"train/Region\"+str(k)+\"/\")\n",
" skimage.external.tifffile.imsave(split_directory+\"train/Region\"+str(k)+\"/\"+\"lattice_light_sheet.tif\", x_file, description=metadata_x)\n",
" skimage.external.tifffile.imsave(split_directory+\"train/Region\"+str(k)+\"/\"+\"truth.tif\", y_file, description=metadata_y)\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}