395 changes: 395 additions & 0 deletions 11_deep_learning/01-Image-Restoration.ipynb
@@ -0,0 +1,395 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### This notebook is adapted from **https://github.com/juglab/n2v**\n",
"In this notebook, we will denoise some Scanning Electron Microscopy Images using an approach called Noise2Void. \n",
"Through this notebook, you will see the complete workflow of an DL approach - (i) data preparation, followed by (ii) training the model and finally (iii) applying the trained model on the test, microscopy images. \n",
"You will find some questions (indicated as `Q`s) for discussion in blue boxes in this notebook :) \n",
"<hr>\n",
"Prior to running this notebook, you need to have the correct environment configured. \n",
"\n",
"#### If not running on google colab\n",
"Open a fresh terminal window and run the following commands:\n",
"\n",
">conda create -n 'dl-biapol' python=3.7 \n",
"conda activate dl-biapol \n",
"pip install tensorflow-gpu=2.4.1 keras=2.3.1 n2v jupyter scikit-image gputools\n",
"\n",
"Finally open this notebook using `jupyter notebook`\n",
"\n",
"#### If running on google colab\n",
"Go to `File>Upload Notebook` and drag and drop this notebook. \n",
"Go to `Runtime > Change Runtime Type > Hardware Accelerator = GPU` \n",
"Create an empty cell following this one and run:\n",
">!pip install tensorflow-gpu==2.4.1 keras==2.3.1 n2v scikit-image gputools\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get imports"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">We import all our dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from n2v.models import N2VConfig, N2V\n",
"import numpy as np\n",
"from csbdeep.utils import plot_history\n",
"from n2v.utils.n2v_utils import manipulate_val_data\n",
"from n2v.internals.N2V_DataGenerator import N2V_DataGenerator\n",
"from matplotlib import pyplot as plt\n",
"from tifffile import imread\n",
"import urllib, os, zipfile, ssl\n",
"ssl._create_default_https_context = ssl._create_unverified_context"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download example data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">Data by Reza Shahidi and Gaspar Jekely, Living Systems Institute, Exeter. \n",
"You could try opening the <i> train.tif </i> and <i> validation.tif </i> in Fiji."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# create a folder for our data.\n",
"if not os.path.isdir('./data'):\n",
" os.mkdir('./data')\n",
"\n",
"# check if data has been downloaded already\n",
"zipPath=\"data/SEM.zip\"\n",
"if not os.path.exists(zipPath):\n",
" #download and unzip data\n",
" data = urllib.request.urlretrieve('https://download.fht.org/jug/n2v/SEM.zip', zipPath)\n",
" with zipfile.ZipFile(zipPath, 'r') as zip_ref:\n",
" zip_ref.extractall(\"data\")\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Training Data Preparation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">For training we load one set of low-SNR images and use the `N2V_DataGenerator` to extract training `X` and validation `X_val` patches. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"datagen = N2V_DataGenerator()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">We load all the `.tif` files from the `data` directory. \n",
"The function will return a list of images (numpy arrays)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"imgs = datagen.load_imgs_from_directory(directory = \"data/\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">Let us look at the images. \n",
"We have to remove the added extra dimensions to display them as `2D` images."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.imshow(imgs[0][0,...,0], cmap='magma')\n",
"plt.show()\n",
"\n",
"plt.imshow(imgs[1][0,...,0], cmap='magma')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">We will use the first image to extract training patches and store them in `X`. \n",
"We will use the second image to extract validation patches."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Training"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"patch_shape = (96,96)\n",
"X = datagen.generate_patches_from_list(imgs[:1], shape=patch_shape)\n",
"X_val = datagen.generate_patches_from_list(imgs[1:], shape=patch_shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\"> Q:<b> Why do you think input images chopped into smaller patches? </b>.<br>\n",
" What could be different schemes for extracting patches? \n",
"</div>"
]
},
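{
"cell_type": "markdown",
"metadata": {},
"source": [
">As one point of comparison for the question above, here is a minimal sketch of an alternative scheme: extracting patches at random positions with plain NumPy. \n",
"The `random_patches` helper below is hypothetical and not part of `N2V_DataGenerator`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical random-cropping scheme, for comparison with the\n",
"# grid-based extraction done by generate_patches_from_list.\n",
"def random_patches(img, patch_shape=(96, 96), n_patches=16, seed=0):\n",
"    # img is expected to have shape (1, H, W, 1), as loaded above\n",
"    rng = np.random.default_rng(seed)\n",
"    h, w = img.shape[1], img.shape[2]\n",
"    ph, pw = patch_shape\n",
"    patches = []\n",
"    for _ in range(n_patches):\n",
"        y = rng.integers(0, h - ph + 1)\n",
"        x = rng.integers(0, w - pw + 1)\n",
"        patches.append(img[0, y:y + ph, x:x + pw, :])\n",
"    return np.stack(patches)\n",
"\n",
"# Example usage (not used for training below):\n",
"# X_random = random_patches(imgs[0])"
]
},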
{
"cell_type": "markdown",
"metadata": {},
"source": [
">Using `N2VConfig` we specify some training parameters. \n",
"For example, `train_epochs` is set to $20$ to indicate that training runs for $20$ epochs. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"config = N2VConfig(X, unet_kern_size=3, \n",
" train_steps_per_epoch=int(X.shape[0]/128), train_epochs=20, train_loss='mse', batch_norm=True, \n",
" train_batch_size=128, n2v_perc_pix=0.198, n2v_patch_shape=(64, 64), \n",
" n2v_manipulator='uniform_withCP', n2v_neighborhood_radius=5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">`model_name` is used to identify the model. \n",
"`basedir` is used to specify where the trained weights are saved. \n",
"We shall now create our network model by using the `N2V` method."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\"> Q: <b> Can you identify what is the default learning rate used while training the N2V model? </b>.<br>\n",
"<u> HINT </u> Pressing <i> Shift + Tab </i> shows the docstring for a given function\n",
"</div>"
]
},
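{
"cell_type": "markdown",
"metadata": {},
"source": [
">To check your answer, you can also inspect the configuration object directly. \n",
"This is a minimal sketch, assuming the config exposes a `train_learning_rate` attribute, as csbdeep-style configs do."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# train_learning_rate is assumed to hold the learning rate in use.\n",
"print(config.train_learning_rate)\n",
"\n",
"# vars(config) lists all configured parameters at once.\n",
"print(vars(config))"
]
},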
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\"> Q: <b> Is it advantageous to set a high batch size? How about a low batch size? </b><br> \n",
"Also, discuss what would setting <i>BatchNorm = True</i> imply? \n",
"</div>\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_name = 'n2v_2D'\n",
"basedir = 'models'\n",
"model = N2V(config, model_name, basedir=basedir)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">Running `model.train` will begin the training for $20$ epochs. \n",
"Training the model will likely take some time. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"history = model.train(X, X_val)"
]
},
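{
"cell_type": "markdown",
"metadata": {},
"source": [
">We can plot how the training and validation losses developed over the epochs, using the `plot_history` helper imported above. \n",
"This assumes the returned history contains `loss` and `val_loss` entries, as Keras training histories usually do."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Plot the training and validation loss curves together.\n",
"plot_history(history, ['loss', 'val_loss'])"
]
},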
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\"> Q: <b> What do you think is the difference between <i>n2v_mse</i> and <i>n2v_abs</i>? </b>.<br>\n",
"Also, the last line which is printed out is <i>Loading network weights from 'weights_best.h5'</i>. What defines the <u> best </u> state of the model?\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Inference"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
">We load the data we want to process within `input_val`. \n",
"The parameter `n_tiles` can be used if images are to big for the GPU memory. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_val = imread('data/validation.tif')\n",
"pred_val = model.predict(input_val, axes='YX')"
]
},
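{
"cell_type": "markdown",
"metadata": {},
"source": [
">If an image does not fit into GPU memory in one go, prediction can be tiled via `n_tiles`. \n",
"A minimal sketch follows; the tile counts below are an assumption and should be adapted to your GPU."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Predict in 2x2 tiles to reduce peak GPU memory usage;\n",
"# tiling only affects memory, not the stitched result.\n",
"pred_val_tiled = model.predict(input_val, axes='YX', n_tiles=(2, 2))"
]
},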
{
"cell_type": "markdown",
"metadata": {},
"source": [
">Let's see results on the validation data. (First we look at the complete image and then we look at a zoomed-in view of the image)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize=(16,8))\n",
"plt.subplot(1,2,1)\n",
"plt.imshow(input_val,cmap=\"magma\")\n",
"plt.title('Input');\n",
"plt.subplot(1,2,2)\n",
"plt.imshow(pred_val,cmap=\"magma\")\n",
"plt.title('Prediction');"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.figure(figsize=(16,8))\n",
"plt.subplot(1,2,1)\n",
"plt.imshow(input_val[200:300, 200:300],cmap=\"magma\")\n",
"plt.title('Input - Zoomed in');\n",
"plt.subplot(1,2,2)\n",
"plt.imshow(pred_val[200:300, 200:300],cmap=\"magma\")\n",
"plt.title('Prediction - Zoomed in');"
]
},
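{
"cell_type": "markdown",
"metadata": {},
"source": [
">If you want to compare input and prediction in Fiji, you can save the denoised image as a TIFF. \n",
"A minimal sketch using `tifffile.imwrite`; the output path is just an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Save the denoised validation image so it can be opened in Fiji.\n",
"from tifffile import imwrite\n",
"imwrite('data/denoised_validation.tif', pred_val)"
]
},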
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\"> Q: <b> We demonstrated microscopy image denoising with this N2V notebook where we first trained a model and later applied the trained model on validation images. </b> <br>\n",
" Would you call this a <i>supervised</i> learning approach, an <i>unsupervised</i> learning approach or something else? Discuss! :)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.13"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": false,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 4
}