JejuNet

Real-Time Video Segmentation on Mobile Devices with DeepLabv3+ and MobileNetv2

Keywords

Video Segmentation, Mobile, Tensorflow Lite

Tutorials
  • Benchmarks: TensorFlow Lite on GPU
    • A post on Medium: Link
    • Detailed results: Link

Introduction

This project runs vision tasks such as object detection and segmentation in real time on mobile devices. Our goal is video segmentation at 24 fps or better on a Google Pixel 2. We use an efficient deep learning network specialized for mobile/embedded devices and exploit data redundancy between consecutive frames to reduce the otherwise unaffordable computational cost. Moreover, the network can be optimized with 8-bit quantization provided by TF-Lite.
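To make the 8-bit quantization mentioned above concrete, here is a minimal pure-Python sketch of the affine quantization scheme that TF-Lite's quantized kernels are built on (real value = scale × (q − zero_point)). This is illustrative only; the function names are not from the JejuNet code, and the real pipeline quantizes tensors inside the converter rather than Python lists.

```python
def quantize_8bit(values):
    """Map floats onto uint8 [0, 255] with an affine transform.

    Returns the quantized values plus the (scale, zero_point) pair
    needed to recover approximate real values later.
    """
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # avoid zero scale for constant inputs
    zero_point = round(-lo / scale)
    q = [max(0, min(255, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point


def dequantize_8bit(q, scale, zero_point):
    """Recover approximate real values: real = scale * (q - zero_point)."""
    return [scale * (qi - zero_point) for qi in q]


weights = [-0.8, -0.1, 0.0, 0.4, 1.2]
q, scale, zp = quantize_8bit(weights)
recovered = dequantize_8bit(q, scale, zp)
# Each recovered weight is within one quantization step of the original.
assert all(abs(w - r) <= scale for w, r in zip(weights, recovered))
```

The key point is that each tensor stores only uint8 values plus one scale and one zero point, which is where the roughly 4x size reduction in the results below comes from.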

Example: Real-Time Video Segmentation (Credit: Google AI)

Architecture

Video Segmentation

Optimization

Experiments

  • Video Segmentation on Google Pixel 2
  • Datasets
    • PASCAL VOC 2012

Plan @Deep Learning Camp Jeju 2018

July, 2018

  • DeepLabv3+ on tf-lite
  • Use data redundancy between frames
  • Optimization
    • Quantization
    • Reduce the number of layers, filters and input size
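The last optimization item, shrinking input size and filter counts, pays off because convolution cost scales with both. A back-of-envelope cost model (illustrative only; this is not the project's profiler, and the layer shapes are hypothetical) shows why:

```python
def conv_macs(hw, c_in, c_out, k=3):
    """Multiply-accumulates for one square conv layer:
    roughly H * W * C_in * C_out * K * K (stride 1, same padding)."""
    return hw * hw * c_in * c_out * k * k


base = conv_macs(512, 32, 64)            # 512x512 feature map
smaller_input = conv_macs(256, 32, 64)   # halve the input resolution
fewer_filters = conv_macs(512, 32, 32)   # halve the output filters

assert base // smaller_input == 4  # half the resolution -> 4x fewer MACs
assert base // fewer_filters == 2  # half the filters -> 2x fewer MACs
```

Halving resolution is quadratic in savings while halving filters is linear, which is why input size is usually the first knob to turn when chasing a frame-rate target.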

Results

More results: bit.ly/jejunet-output

Demo

DeepLabv3+ on tf-lite

Video Segmentation on Google Pixel 2

Trade-off Between Speed(FPS) and Accuracy(mIoU)


Low Bits Quantization

| Network | Input | Output stride | Quantization (w/a) | PASCAL mIoU | Runtime (.tflite) | File size (.tflite) |
| --- | --- | --- | --- | --- | --- | --- |
| DeepLabv3, MobileNetv2 | 512x512 | 16 | 32/32 | 79.9% | 862 ms | 8.5 MB |
| DeepLabv3, MobileNetv2 | 512x512 | 16 | 8/8 | 79.2% | 451 ms | 2.2 MB |
| DeepLabv3, MobileNetv2 | 512x512 | 16 | 6/6 | 70.7% | - | - |
| DeepLabv3, MobileNetv2 | 512x512 | 16 | 6/4 | 30.3% | - | - |
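The file sizes in the table line up with what the bit widths predict. Assuming weights dominate the .tflite file, going from 32-bit to 8-bit weights should shrink it by about 4x, and a quick check on the reported numbers confirms this:

```python
# Sanity check on the table above: 32-bit -> 8-bit weights should give
# roughly a 32/8 = 4x size reduction; metadata and non-weight data keep
# the measured ratio slightly below that.
float_mb, int8_mb = 8.5, 2.2
ratio = float_mb / int8_mb
assert 3.5 < ratio < 4.0
```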


References

  1. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
    Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam
    [link]. arXiv:1802.02611, 2018.

  2. Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation
    Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen
    [link]. arXiv:1801.04381, 2018.

Authors

Acknowledgement

This work was partially supported by Deep Learning Camp Jeju and sponsors including Google and SK Telecom. Thank you for the generous support of TPUs and a Google Pixel 2, and thanks to Hyungsuk and all the mentees for the TensorFlow implementations and useful discussions.

License

© Taekmin Kim, 2018. Licensed under the MIT License.
