11_deep_learning/01-Image-Restoration.ipynb (2 changes: 1 addition & 1 deletion)

@@ -16,7 +16,7 @@
 "\n",
 ">conda create -n 'dl-biapol' python=3.7 \n",
 "conda activate dl-biapol \n",
-"pip install tensorflow-gpu=2.4.1 keras=2.3.1 n2v jupyter scikit-image gputools\n",
+"pip install tensorflow-gpu==2.4.1 keras==2.3.1 n2v jupyter scikit-image gputools\n",
 "\n",
 "Finally open this notebook using `jupyter notebook`\n",
 "\n",
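Note on the change above: pip requires the `==` operator to pin an exact version, so `tensorflow-gpu=2.4.1` (with a single `=`) is not a valid requirement specifier and the original command fails with an "Invalid requirement" error rather than installing a pinned version. The same fix recurs in the second notebook below. The nearby `conda create ... python=3.7` line is fine as-is: conda, unlike pip, uses a single `=` for version matching.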
11_deep_learning/02-Image-Semantic-Segmentation.ipynb (66 changes: 33 additions & 33 deletions)

@@ -2,15 +2,15 @@
 "cells": [
 {
 "cell_type": "markdown",
-"id": "df043420",
+"id": "61a163c8",
 "metadata": {},
 "source": [
-"### This notebook is adapted from https://github.com/dl4mia/04_instance_segmentation/blob/main/1_semantic_segmentation_2D.ipynb"
+"### This notebook is adapted from **https://github.com/dl4mia/04_instance_segmentation/blob/main/1_semantic_segmentation_2D.ipynb**"
 ]
 },
 {
 "cell_type": "markdown",
-"id": "f42b61ff",
+"id": "8d8bb293",
 "metadata": {},
 "source": [
 "In this notebook, we will perform pixel-wise segmentation or <i> semantic segmentation </i> on some microscopy images using a standard model architecture called the U-Net. \n",
@@ -24,7 +24,7 @@
 "\n",
 ">conda create -n 'dl-biapol' python=3.7 \n",
 "conda activate dl-biapol \n",
-"pip install tensorflow-gpu=2.4.1 keras=2.3.1 n2v jupyter scikit-image gputools\n",
+"pip install tensorflow-gpu==2.4.1 keras==2.3.1 n2v jupyter scikit-image gputools\n",
 "\n",
 "Finally open this notebook using `jupyter notebook`\n",
 "\n",
@@ -37,7 +37,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "3c078f0c",
+"id": "aae6c93c",
 "metadata": {},
 "source": [
 "### Get Dependencies"
@@ -46,7 +46,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "88cf29aa",
+"id": "787ee718",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -71,15 +71,15 @@
 },
 {
 "cell_type": "markdown",
-"id": "22059655",
+"id": "eaa7d193",
 "metadata": {},
 "source": [
 "### Data"
 ]
 },
 {
 "cell_type": "markdown",
-"id": "62cdd825",
+"id": "2f491c62",
 "metadata": {},
 "source": [
 "> First we download some sample images and corresponding masks"
@@ -88,7 +88,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "0b96ab62",
+"id": "880568ef",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -103,7 +103,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "57af5782",
+"id": "2f8365a6",
 "metadata": {},
 "source": [
 "> Next we load the data, generate from the annotation masks background/foreground/cell border masks, and crop out a central patch (this is just for simplicity, as it makes our life a bit easier when all images have the same shape)\n"
@@ -112,7 +112,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "652b1f9c",
+"id": "b7eadac0",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -140,7 +140,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "d92f9390",
+"id": "5ddf35bf",
 "metadata": {},
 "source": [
 "<div class=\"alert alert-block alert-info\"> Q:<b> What does <u>one-hot</u> parameter in the <i>to_3class_label</i> function imply? </b>.<br>\n",
@@ -149,7 +149,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "8380363d",
+"id": "ada06194",
 "metadata": {},
 "source": [
 "> Plot an example image"
@@ -158,7 +158,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "4be38ec6",
+"id": "1aaf08dc",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -174,7 +174,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "04790b83",
+"id": "bbda6ec7",
 "metadata": {},
 "source": [
 "<div class=\"alert alert-block alert-info\"> Q:<b> Plot some more images. What kind of data is shown? How variable is it? Do the segmentation masks look reasonable? </b>.<br>\n",
@@ -183,7 +183,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "d5218dc1",
+"id": "87a7fd5d",
 "metadata": {},
 "source": [
 "> We now split the training data into ~ 80/20 training and validation data"
@@ -192,7 +192,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "12418fa0",
+"id": "5839b26f",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -215,7 +215,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "bee6ea5e",
+"id": "4c07ff28",
 "metadata": {},
 "source": [
 "> We now will construct a very simple 3-class segmentation model, for which we will use a UNet"
@@ -224,7 +224,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "f7d22f6f",
+"id": "76bbfc4f",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -237,7 +237,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "3b608d93",
+"id": "9dd9c3ab",
 "metadata": {},
 "source": [
 "<div class=\"alert alert-block alert-info\"> Q:<b> What is the intuition behind the <u>skip connections</u> in a U-Net? </b> <br>\n",
@@ -248,15 +248,15 @@
 },
 {
 "cell_type": "markdown",
-"id": "9f4cd8c3",
+"id": "ba979c13",
 "metadata": {},
 "source": [
 "### Training the model"
 ]
 },
 {
 "cell_type": "markdown",
-"id": "0efb7625",
+"id": "a5d2df1a",
 "metadata": {},
 "source": [
 "> We now will compile the model, i.e. deciding on a loss function and an optimizer. \n",
@@ -267,7 +267,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "e4a8f340",
+"id": "815eeb88",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -276,7 +276,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "df168f33",
+"id": "fd9f4ca3",
 "metadata": {},
 "source": [
 "> Before we train the model, we define some callbacks that will monitor the training loss etc"
@@ -285,7 +285,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "a409f9b8",
+"id": "528db37e",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -304,7 +304,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "4975ba35",
+"id": "b1bf7e60",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -314,7 +314,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "8aa07946",
+"id": "d1dc9762",
 "metadata": {},
 "source": [
 "### Prediction on unseen data"
@@ -323,7 +323,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "7ab81b7d",
+"id": "3c614d67",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -337,7 +337,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "5445d7b6",
+"id": "3cbb2fa3",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -348,7 +348,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "51684a40",
+"id": "53e32506",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -373,7 +373,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "391387aa",
+"id": "7a183229",
 "metadata": {},
 "source": [
 "<div class=\"alert alert-block alert-info\"> Q:<b> Can you spot the label image mistakes? </b> </div>"
@@ -382,15 +382,15 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "a9d251f8",
+"id": "eb4d56f3",
 "metadata": {},
 "outputs": [],
 "source": []
 },
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "f1753517",
+"id": "72786feb",
 "metadata": {},
 "outputs": [],
 "source": []
11_deep_learning/Slides.pdf (binary file added; contents not shown)
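The training cells themselves are collapsed in this diff, but the surrounding markdown describes the full pipeline: one-hot 3-class labels (background/foreground/cell border), a U-Net, a compile step choosing a loss function and an optimizer, and callbacks that monitor training. As a rough illustration of how those pieces fit together in Keras, here is a minimal self-contained sketch; the tiny stand-in model, input shape, learning rate, and random data are illustrative assumptions, not taken from the notebook:

# Hypothetical sketch of the collapsed compile/train cells; not from the diff.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Tiny stand-in for the notebook's U-Net: only the 3-class softmax output
# convention matters here (background / foreground / cell border).
inputs = layers.Input(shape=(256, 256, 1))
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
outputs = layers.Conv2D(3, 1, activation="softmax")(x)
model = models.Model(inputs, outputs)

# Categorical cross-entropy matches the one-hot 3-class masks.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4),
              loss="categorical_crossentropy")

# Random arrays standing in for the cropped image patches and one-hot masks.
X = np.random.rand(8, 256, 256, 1).astype("float32")
Y = tf.keras.utils.to_categorical(np.random.randint(0, 3, (8, 256, 256)), 3)

# A callback monitors training, here lowering the learning rate on a plateau.
model.fit(X, Y, validation_split=0.2, batch_size=4, epochs=1,
          callbacks=[tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=5)])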