diff --git a/11_deep_learning/01-Image-Restoration.ipynb b/11_deep_learning/01-Image-Restoration.ipynb index 0364a8a..9c7d378 100644 --- a/11_deep_learning/01-Image-Restoration.ipynb +++ b/11_deep_learning/01-Image-Restoration.ipynb @@ -16,7 +16,7 @@ "\n", ">conda create -n 'dl-biapol' python=3.7 \n", "conda activate dl-biapol \n", - "pip install tensorflow-gpu=2.4.1 keras=2.3.1 n2v jupyter scikit-image gputools\n", + "pip install tensorflow-gpu==2.4.1 keras==2.3.1 n2v jupyter scikit-image gputools\n", "\n", "Finally open this notebook using `jupyter notebook`\n", "\n", diff --git a/11_deep_learning/02-Image-Semantic-Segmentation.ipynb b/11_deep_learning/02-Image-Semantic-Segmentation.ipynb index bf8a876..62a3b1e 100644 --- a/11_deep_learning/02-Image-Semantic-Segmentation.ipynb +++ b/11_deep_learning/02-Image-Semantic-Segmentation.ipynb @@ -2,15 +2,15 @@ "cells": [ { "cell_type": "markdown", - "id": "df043420", + "id": "61a163c8", "metadata": {}, "source": [ - "### This notebook is adapted from https://github.com/dl4mia/04_instance_segmentation/blob/main/1_semantic_segmentation_2D.ipynb" + "### This notebook is adapted from **https://github.com/dl4mia/04_instance_segmentation/blob/main/1_semantic_segmentation_2D.ipynb**" ] }, { "cell_type": "markdown", - "id": "f42b61ff", + "id": "8d8bb293", "metadata": {}, "source": [ "In this notebook, we will perform pixel-wise segmentation or semantic segmentation on some microscopy images using a standard model architecture called the U-Net. \n", @@ -24,7 +24,7 @@ "\n", ">conda create -n 'dl-biapol' python=3.7 \n", "conda activate dl-biapol \n", - "pip install tensorflow-gpu=2.4.1 keras=2.3.1 n2v jupyter scikit-image gputools\n", + "pip install tensorflow-gpu==2.4.1 keras==2.3.1 n2v jupyter scikit-image gputools\n", "\n", "Finally open this notebook using `jupyter notebook`\n", "\n", @@ -37,7 +37,7 @@ }, { "cell_type": "markdown", - "id": "3c078f0c", + "id": "aae6c93c", "metadata": {}, "source": [ "### Get Dependencies" @@ -46,7 +46,7 @@ { "cell_type": "code", "execution_count": null, - "id": "88cf29aa", + "id": "787ee718", "metadata": {}, "outputs": [], "source": [ @@ -71,7 +71,7 @@ }, { "cell_type": "markdown", - "id": "22059655", + "id": "eaa7d193", "metadata": {}, "source": [ "### Data" @@ -79,7 +79,7 @@ }, { "cell_type": "markdown", - "id": "62cdd825", + "id": "2f491c62", "metadata": {}, "source": [ "> First we download some sample images and corresponding masks" @@ -88,7 +88,7 @@ { "cell_type": "code", "execution_count": null, - "id": "0b96ab62", + "id": "880568ef", "metadata": {}, "outputs": [], "source": [ @@ -103,7 +103,7 @@ }, { "cell_type": "markdown", - "id": "57af5782", + "id": "2f8365a6", "metadata": {}, "source": [ "> Next we load the data, generate from the annotation masks background/foreground/cell border masks, and crop out a central patch (this is just for simplicity, as it makes our life a bit easier when all images have the same shape)\n" @@ -112,7 +112,7 @@ { "cell_type": "code", "execution_count": null, - "id": "652b1f9c", + "id": "b7eadac0", "metadata": {}, "outputs": [], "source": [ @@ -140,7 +140,7 @@ }, { "cell_type": "markdown", - "id": "d92f9390", + "id": "5ddf35bf", "metadata": {}, "source": [ "
Q: What does the one-hot parameter in the to_3class_label function imply?
\n", @@ -149,7 +149,7 @@ }, { "cell_type": "markdown", - "id": "8380363d", + "id": "ada06194", "metadata": {}, "source": [ "> Plot an example image" @@ -158,7 +158,7 @@ { "cell_type": "code", "execution_count": null, - "id": "4be38ec6", + "id": "1aaf08dc", "metadata": {}, "outputs": [], "source": [ @@ -174,7 +174,7 @@ }, { "cell_type": "markdown", - "id": "04790b83", + "id": "bbda6ec7", "metadata": {}, "source": [ "
Q: Plot some more images. What kind of data is shown? How variable is it? Do the segmentation masks look reasonable?
\n", @@ -183,7 +183,7 @@ }, { "cell_type": "markdown", - "id": "d5218dc1", + "id": "87a7fd5d", "metadata": {}, "source": [ "> We now split the training data into ~ 80/20 training and validation data" @@ -192,7 +192,7 @@ { "cell_type": "code", "execution_count": null, - "id": "12418fa0", + "id": "5839b26f", "metadata": {}, "outputs": [], "source": [ @@ -215,7 +215,7 @@ }, { "cell_type": "markdown", - "id": "bee6ea5e", + "id": "4c07ff28", "metadata": {}, "source": [ "> We now will construct a very simple 3-class segmentation model, for which we will use a UNet" @@ -224,7 +224,7 @@ { "cell_type": "code", "execution_count": null, - "id": "f7d22f6f", + "id": "76bbfc4f", "metadata": {}, "outputs": [], "source": [ @@ -237,7 +237,7 @@ }, { "cell_type": "markdown", - "id": "3b608d93", + "id": "9dd9c3ab", "metadata": {}, "source": [ "
Q: What is the intuition behind the skip connections in a U-Net?
\n", @@ -248,7 +248,7 @@ }, { "cell_type": "markdown", - "id": "9f4cd8c3", + "id": "ba979c13", "metadata": {}, "source": [ "### Training the model" @@ -256,7 +256,7 @@ }, { "cell_type": "markdown", - "id": "0efb7625", + "id": "a5d2df1a", "metadata": {}, "source": [ "> We now will compile the model, i.e. deciding on a loss function and an optimizer. \n", @@ -267,7 +267,7 @@ { "cell_type": "code", "execution_count": null, - "id": "e4a8f340", + "id": "815eeb88", "metadata": {}, "outputs": [], "source": [ @@ -276,7 +276,7 @@ }, { "cell_type": "markdown", - "id": "df168f33", + "id": "fd9f4ca3", "metadata": {}, "source": [ "> Before we train the model, we define some callbacks that will monitor the training loss etc" @@ -285,7 +285,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a409f9b8", + "id": "528db37e", "metadata": {}, "outputs": [], "source": [ @@ -304,7 +304,7 @@ { "cell_type": "code", "execution_count": null, - "id": "4975ba35", + "id": "b1bf7e60", "metadata": {}, "outputs": [], "source": [ @@ -314,7 +314,7 @@ }, { "cell_type": "markdown", - "id": "8aa07946", + "id": "d1dc9762", "metadata": {}, "source": [ "### Prediction on unseen data" @@ -323,7 +323,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7ab81b7d", + "id": "3c614d67", "metadata": {}, "outputs": [], "source": [ @@ -337,7 +337,7 @@ { "cell_type": "code", "execution_count": null, - "id": "5445d7b6", + "id": "3cbb2fa3", "metadata": {}, "outputs": [], "source": [ @@ -348,7 +348,7 @@ { "cell_type": "code", "execution_count": null, - "id": "51684a40", + "id": "53e32506", "metadata": {}, "outputs": [], "source": [ @@ -373,7 +373,7 @@ }, { "cell_type": "markdown", - "id": "391387aa", + "id": "7a183229", "metadata": {}, "source": [ "
Q: Can you spot the mistakes in the label image?
" @@ -382,7 +382,7 @@ { "cell_type": "code", "execution_count": null, - "id": "a9d251f8", + "id": "eb4d56f3", "metadata": {}, "outputs": [], "source": [] @@ -390,7 +390,7 @@ { "cell_type": "code", "execution_count": null, - "id": "f1753517", + "id": "72786feb", "metadata": {}, "outputs": [], "source": [] diff --git a/11_deep_learning/Slides.pdf b/11_deep_learning/Slides.pdf new file mode 100644 index 0000000..759a7be Binary files /dev/null and b/11_deep_learning/Slides.pdf differ