Members: Satoshi Kume, Norio Kobayashi, Hiroshi Masuya
Description:
- To discuss metadata descriptions (phenotypes/morphology) for ROIs and/or masked regions.
- To develop a system that supports metadata annotation for gaining insight into images using machine learning.
- To consider effective amplification of training data from a small dataset.
Machine
- PC : HPCT W111ga
- CPU : Intel Xeon W-2123 (Skylake, 3.60 GHz, 4 cores)
- GPU : NVIDIA TITAN RTX (GDDR6 24GB) x 2
- Memory : 128 GB
OS / Software
- OS : CentOS Linux 7.6.1810
- NVIDIA Driver : 418.67 / gcc : 4.8.5
- CUDA : V9.0.176
- RStudio (R version 3.6.0), R-Keras 2.2.4, R-TensorFlow 1.11.0 (backend)
Image dataset
- Mouse B6J kidney electron microscopy images
  - Nucleus
  - Mitochondria
- Cropped images of around 1000 x 1000 px
Pre-processing
- Resize images to a 512-1024 px square
- Normalization
- CLAHE (Contrast Limited Adaptive Histogram Equalization)
- Gamma correction (of minor importance)
- Training image amplification: this step was skipped at BH19 because it is time-consuming
  - Rotation : 0, 90, 180, 270 degrees
  - Flip : Y/N
  - Horizontal translation : 1/8-7/8 steps
  - Vertical translation : 1/8-7/8 steps
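A minimal base-R sketch of the rotation/flip part of the amplification above (operating on a single grayscale matrix; translations are omitted, and note this step was skipped at BH19):

```r
# 90-degree clockwise rotation of a matrix
rotate90 <- function(m) t(apply(m, 2, rev))

# Amplify one image into its 4 rotations plus the horizontal flip of each
amplify <- function(img) {
  r0   <- img
  r90  <- rotate90(r0)
  r180 <- rotate90(r90)
  r270 <- rotate90(r180)
  rots  <- list(r0, r90, r180, r270)
  flips <- lapply(rots, function(m) m[, ncol(m):1])  # flip left-right
  c(rots, flips)  # 8 variants per input image
}

img <- matrix(1:16, nrow = 4)
length(amplify(img))  # 8
```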
- Random sequence of images
  library(random)
  Ran <- c(random::randomSequence(min = 1, max = length(XYG$X), col = 1))
- list2tensor
  list2tensor <- function(xList) {
    xTensor <- simplify2array(xList)  # stack the list into an h x w x c x n array
    aperm(xTensor, c(4, 1, 2, 3))     # reorder to n x h x w x c
  }
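A quick standalone check of list2tensor (the definition is repeated here so the snippet runs on its own): stacking five 8 x 8 x 1 arrays should yield a 5 x 8 x 8 x 1 tensor in the sample-first order Keras expects.

```r
list2tensor <- function(xList) {
  xTensor <- simplify2array(xList)  # stack the list into an h x w x c x n array
  aperm(xTensor, c(4, 1, 2, 3))     # reorder to n x h x w x c
}

imgs <- replicate(5, array(runif(8 * 8), dim = c(8, 8, 1)), simplify = FALSE)
dim(list2tensor(imgs))  # 5 8 8 1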
Deep learning model

Model
Evaluation metrics
- IoU (Intersection-Over-Union)
  iou <- function(y_true, y_pred, smooth = 1.0) {
    y_true_f <- k_flatten(y_true)
    y_pred_f <- k_flatten(y_pred)
    intersection <- k_sum(y_true_f * y_pred_f)
    union <- k_sum(y_true_f + y_pred_f) - intersection
    (intersection + smooth) / (union + smooth)
  }
- Dice Coefficient (F1 score)
  dice_coef <- function(y_true, y_pred, smooth = 1.0) {
    y_true_f <- k_flatten(y_true)
    y_pred_f <- k_flatten(y_pred)
    intersection <- k_sum(y_true_f * y_pred_f)
    (2 * intersection + smooth) / (k_sum(y_true_f) + k_sum(y_pred_f) + smooth)
  }
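The same metrics can be spot-checked outside the Keras backend with plain numeric vectors (a hypothetical sanity check, not part of the training code; `iou_plain` and `dice_plain` are renamed base-R equivalents of the functions above):

```r
iou_plain <- function(y_true, y_pred, smooth = 1.0) {
  intersection <- sum(y_true * y_pred)
  union <- sum(y_true) + sum(y_pred) - intersection
  (intersection + smooth) / (union + smooth)
}

dice_plain <- function(y_true, y_pred, smooth = 1.0) {
  intersection <- sum(y_true * y_pred)
  (2 * intersection + smooth) / (sum(y_true) + sum(y_pred) + smooth)
}

y <- c(1, 1, 0, 0)  # ground-truth mask, flattened
p <- c(1, 0, 0, 0)  # predicted mask, flattened
iou_plain(y, p)   # (1 + 1) / (2 + 1) = 2/3
dice_plain(y, p)  # (2 + 1) / (3 + 1) = 0.75
```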
Parameter tuning
FLAGS <- flags(
  flag_numeric("kernel_size", 3),
  flag_numeric("nlevels", 3),
  flag_numeric("nfilters", 128),
  flag_numeric("BatchSize", 4),
  flag_numeric("dropout1", 0.1),
  flag_numeric("dropout2", 0.1),
  flag_numeric("dropout3", 0.1)
)
Parameter tuning - Learning rate
1. 1st
   lr_schedule <- function(epoch, lr) {
     if (epoch <= 10) {
       0.01
     } else if (epoch <= 50) {
       0.001
     } else if (epoch <= 75) {
       0.0001
     } else if (epoch <= 100) {
       0.00001
     } else {
       0.000001
     }
   }
   lr_reducer <- callback_learning_rate_scheduler(lr_schedule)
2. 2nd
   lr_schedule <- function(epoch, lr) {
     if (epoch <= 25) {
       0.001
     } else if (epoch <= 50) {
       0.0001
     } else {
       0.00001
     }
   }
3. 3rd
4. 4th
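Because each schedule is a plain R function of the epoch, it can be spot-checked without Keras. Below, the 2nd schedule is evaluated at a few representative epochs (the lr argument is accepted but unused, matching the callback signature):

```r
lr_schedule <- function(epoch, lr) {
  if (epoch <= 25) {
    0.001
  } else if (epoch <= 50) {
    0.0001
  } else {
    0.00001
  }
}

sapply(c(1, 30, 60, 100), lr_schedule, lr = NA)  # 1e-03 1e-04 1e-05 1e-05
```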
Image Dataset
1. Training images : 44 images
2. Checking images during training : 5 images
Calculation
Evaluation and modification cycle of results
Particle shape