Real-Time Semantic Segmentation on Mobile Devices

This project is an example of real-time semantic segmentation for mobile apps.

The architecture is inspired by MobileNetV2 and U-Net.

LFW (Labeled Faces in the Wild) is used as the dataset.

The goal of this project is to detect hair segments with reasonable accuracy and speed on mobile devices. Currently, it achieves 0.89 IoU.
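
For reference, IoU (intersection over union) for a binary mask can be computed as in the following sketch. The 0.5 threshold and the function itself are illustrative assumptions, not taken from this repository's eval_unet.py.

import torch

def iou(pred, target, threshold=0.5, eps=1e-6):
    # Binarize predicted probabilities and compare with the ground truth mask.
    pred = (pred > threshold).float()
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum() - intersection
    return (intersection + eps) / (union + eps)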

More details about the speed vs. accuracy trade-off are available in my post.

Example of a predicted image.

Example application

  • iOS
  • Android (TODO)

Requirements

  • PyTorch 0.4
  • CoreML for the iOS app

About Model

At this time, there is only one model in this repository, MobileUNet.py. As a typical U-Net architecture, it has encoder and decoder parts, which consist of the depthwise separable convolution blocks proposed by MobileNets.

The input image is encoded down to 1/32 of its size and then decoded back up to 1/2. Finally, a scoring layer produces the segmentation map and upsamples it to the original size.
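
As an illustration, a depthwise separable convolution block in the MobileNets style could look like the sketch below. This is a hedged outline, not the exact block from MobileUNet.py; MobileNetV2 in particular extends this idea with inverted residuals and linear bottlenecks.

import torch.nn as nn

class DepthwiseConvBlock(nn.Module):
    # Depthwise separable convolution: a per-channel (depthwise) 3x3 conv
    # followed by a 1x1 (pointwise) conv, each with batch norm and ReLU6.
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            # groups=in_ch applies one 3x3 filter per input channel.
            nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU6(inplace=True),
            # The 1x1 conv mixes channels and changes their count.
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU6(inplace=True),
        )

    def forward(self, x):
        return self.block(x)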

Steps for training

Data Preparation

Data is available at LFW. To get the mask images, refer to issue #11 for details. Once you have the images and masks, arrange them as shown below.

data/
  raw/
    images/
      0001.jpg
      0002.jpg
    masks/
      0001.ppm
      0002.ppm
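
To show how this layout can be consumed, here is a minimal dataset sketch that pairs each image with its mask by file stem. The class name and the joint transform handling are illustrative assumptions; the actual dataset.py may differ.

from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class FaceMaskDataset(Dataset):
    # Pairs data/raw/images/NNNN.jpg with data/raw/masks/NNNN.ppm by stem.
    def __init__(self, root='data/raw', transform=None):
        self.images = sorted(Path(root, 'images').glob('*.jpg'))
        self.masks_dir = Path(root, 'masks')
        self.transform = transform

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img_path = self.images[idx]
        img = Image.open(img_path).convert('RGB')
        mask = Image.open(self.masks_dir / (img_path.stem + '.ppm'))
        if self.transform is not None:
            img, mask = self.transform(img, mask)
        return img, mask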

Training

If you use 224 x 224 as the input size, a pre-trained MobileNetV2 weight file is available. Download it from A PyTorch implementation of MobileNetV2 and put the weight file under the weights directory.

python train_unet.py \
  --img_size=224 \
  --pre_trained='weights/mobilenet_v2.pth.tar'

If you use other input sizes, the model will be trained from scratch.

python train_unet.py --img_size=192

The Dice coefficient is used as the loss function.
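
A common formulation of a Dice-based loss looks like the following sketch; the exact smoothing constant and reduction in this repository's loss.py may differ.

import torch

def dice_loss(pred, target, smooth=1.0):
    # pred and target have shape (N, 1, H, W) with values in [0, 1].
    pred = pred.contiguous().view(pred.size(0), -1)
    target = target.contiguous().view(target.size(0), -1)
    intersection = (pred * target).sum(dim=1)
    dice = (2.0 * intersection + smooth) / (pred.sum(dim=1) + target.sum(dim=1) + smooth)
    # Minimizing 1 - Dice maximizes overlap between prediction and mask.
    return 1 - dice.mean()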

Pretrained model

Input size | IoU  | Download
224        | 0.89 | Google Drive

Converting

Since the purpose of this project is to run the model on mobile devices, this repository contains scripts to convert models for iOS and Android.
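
For iOS, a common route in the PyTorch 0.4 era was to export the model to ONNX and then convert the ONNX graph to CoreML. The sketch below assumes that route with the onnx-coreml package, an assumed module path, and a hypothetical weights file name; coreml_converter.py in this repository may work differently.

import torch
import onnx
from onnx_coreml import convert  # pip install onnx-coreml

from nets.MobileUNet import MobileUNet  # assumed module path

# Export the trained PyTorch model to ONNX with a fixed input size.
model = MobileUNet()
model.load_state_dict(torch.load('weights/model.pth', map_location='cpu'))  # hypothetical file
model.eval()
torch.onnx.export(model, torch.randn(1, 3, 224, 224), 'unet.onnx')

# Convert the ONNX graph to a CoreML model that can be bundled with the iOS app.
mlmodel = convert(onnx.load('unet.onnx'))
mlmodel.save('unet.mlmodel')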

TBD

  • Report speed vs. accuracy on mobile devices.
  • Convert PyTorch models for Android using TensorFlow Lite.