VNOpenAI/skin-lesion-segmentation


Skin Lesion Segmentation

Implementation of U-Net and DoubleU-Net for lesion boundary segmentation (ISIC 2018, Task 1).

This implementation has been integrated into VN AIDr, an open-source medical image processing project. Documentation.

TODO

  • Build U-Net model.

  • Build DoubleU-Net model.

  • Write code for Dice loss.

  • Write code for Jaccard index (mean Intersection over Union).

  • Augment data.

  • Implement training and data-preprocessing code.

  • Implement demo code.

  • Convert models to ONNX format.

  • Add pre-trained models.
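The Dice loss and Jaccard index from the list above can be sketched in plain NumPy. This is an illustrative sketch only; the repository's metrics.py presumably implements the framework-specific versions, and the function names here are my own:

```python
import numpy as np

def dice_coefficient(y_true, y_pred, smooth=1e-6):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    y_true = y_true.astype(np.float64).ravel()
    y_pred = y_pred.astype(np.float64).ravel()
    intersection = np.sum(y_true * y_pred)
    # smooth avoids division by zero when both masks are empty
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

def dice_loss(y_true, y_pred):
    """Dice loss = 1 - Dice coefficient; minimized when masks overlap perfectly."""
    return 1.0 - dice_coefficient(y_true, y_pred)

def jaccard_index(y_true, y_pred, smooth=1e-6):
    """Jaccard index (IoU) between two binary masks: |A∩B| / |A∪B|."""
    y_true = y_true.astype(np.float64).ravel()
    y_pred = y_pred.astype(np.float64).ravel()
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return (intersection + smooth) / (union + smooth)
```

For example, with ground truth [1, 1, 0, 0] and prediction [1, 0, 0, 0], the intersection is 1, so Dice = 2/3 and IoU = 1/2; Dice is always at least as large as IoU on the same pair of masks.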

Prerequisites

Architecture

1. U-Net

I modified the U-Net architecture to work with input images of size 192x256x3, the same size used in the DoubleU-Net paper.
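One convenient property of 192x256: both dimensions are divisible by 2^4, so the feature maps keep integer sizes through the four 2x2 pooling stages of a standard U-Net encoder. A small sanity check (assuming the usual four-level encoder):

```python
h, w = 192, 256  # input spatial size used here and in the DoubleU-Net paper
for _ in range(4):  # four 2x2 max-pool stages in a standard U-Net encoder
    assert h % 2 == 0 and w % 2 == 0, "spatial size must stay divisible by 2"
    h, w = h // 2, w // 2
print((h, w))  # bottleneck spatial size: (12, 16)
```

An input size such as 200x250 would break at the second pooling stage (125 is odd), which is why resizing to a power-of-two-friendly shape matters.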

U-Net architecture

2. DoubleU-Net

DoubleU-Net consists of two sub-networks, resembling two U-Nets stacked in sequence.

The input is fed into the modified U-Net (sub-network 1), which generates Output1, a map with the same spatial size as the input image. Sub-network 2 refines this result: it is built from scratch following the same idea as U-Net, but its decoder also receives skip connections from the encoder of sub-network 1.

Finally, Output1 and Output2 are concatenated along the channel axis, and either channel can be taken as the prediction. In the original paper, the authors showed that Output1 and Output2 give essentially the same result.
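The final concatenation step can be illustrated with plain NumPy, shapes only. The arrays below are placeholders for the two sigmoid mask outputs; the real model does this with a channel-wise concatenate layer:

```python
import numpy as np

h, w = 192, 256
output1 = np.random.rand(h, w, 1)  # mask from sub-network 1 (placeholder values)
output2 = np.random.rand(h, w, 1)  # refined mask from sub-network 2 (placeholder values)

# concatenate along the channel (last) axis, as DoubleU-Net does at its head
combined = np.concatenate([output1, output2], axis=-1)
print(combined.shape)  # (192, 256, 2)

# either channel works as the prediction; the paper reports similar results for both
prediction = combined[..., 1]
print(prediction.shape)  # (192, 256)
```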

DoubleU-Net architecture.

Training

Data

There are two common ways to augment data:

  • Offline augmentation: Generate augmented images before training.

  • Online augmentation: Generate augmented images on the fly during training.

To reduce training time, I chose the first way.
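Offline augmentation amounts to expanding the dataset on disk before training starts. A minimal sketch using NumPy flips (illustrative only; the actual transforms behind data_augmented/ may differ). The key constraint is that the same spatial transform must be applied to the image and its mask so they stay aligned:

```python
import numpy as np

def augment_pair(image, mask):
    """Yield the original (image, mask) pair plus three geometric variants.

    Applying identical transforms to image and mask keeps them pixel-aligned.
    """
    yield image, mask
    yield np.fliplr(image), np.fliplr(mask)      # horizontal flip
    yield np.flipud(image), np.flipud(mask)      # vertical flip
    yield np.rot90(image, 2), np.rot90(mask, 2)  # 180-degree rotation

# example: one 192x256 RGB image and its binary mask expand to four pairs
image = np.zeros((192, 256, 3))
mask = np.zeros((192, 256))
pairs = list(augment_pair(image, mask))
print(len(pairs))  # 4
```

In an offline pipeline, each yielded pair would be written to the image/ and mask/ folders before training begins.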

Download the raw data from [5]. For convenience, I have already split and augmented the data and stored it at link [6]. Download it and place it in the same folder as your code.

Your directory structure will be:

Unet-and-double-Unet-implementation
├──data_augmented
│    ├── mask/
│    ├── image/
├──validation
│    ├── mask/
│    ├── image/
├──image
│    ├── demo2.png
│    ├── demo3.png
│    ├── DoubleU-net_Architecture.png
│    ├── Unet_Architecture.png
├──.gitignore
├──README.md
├──data.py
├──metrics.py
├──model.py
├──predict.py
├──requirements.txt
├──train.py
└──utils.py

Train

Train your model:

!python train.py

A checkpoint of your model is saved in the checkpoint folder after every epoch. I also provide a pre-trained model in [7].

Result

demo1

demo2

References:

[1] Original paper: DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation

[2] ASPP block: DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

[3] Squeeze-and-Excitation block: Squeeze-and-Excitation Networks

[4] Repository 2020-CBMS-DoubleU-Net

[5] Data: ISIC2018_task1 Lesion Boundary Segmentation

[6] My augmented data: link here

[7] Pre-trained model: link here

Contact

If you find any mistakes in my work, please contact me; I would be really grateful.

Thanks for your interest!
