Real-Time Video Segmentation on Mobile Devices
Video Segmentation, Mobile, TensorFlow Lite
Running vision tasks such as object detection and segmentation in real time on mobile devices. Our goal is to run video segmentation in real time, at a minimum of 24 fps, on a Google Pixel 2. We use an efficient deep learning network specialized for mobile/embedded devices and exploit the data redundancy between consecutive frames to reduce the otherwise unaffordable computational cost. Moreover, the network can be optimized with the 8-bit quantization provided by TensorFlow Lite.
Example: Real-Time Video Segmentation (Credit: Google AI)
- 8-bit Quantization on TensorFlow Lite
- Video Segmentation on Google Pixel 2
- PASCAL VOC 2012
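To make the 8-bit quantization step concrete, here is a minimal sketch of the affine (scale and zero-point) quantization scheme that TensorFlow Lite uses for weights and activations. The function names and the example weight values are illustrative, not taken from the project code; in practice the conversion is done by the tf-lite converter rather than by hand.

```python
def quantize_8bit(values):
    """Affine (asymmetric) 8-bit quantization: map floats to uint8 codes
    via a scale and a zero-point, as in TensorFlow Lite's scheme."""
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)      # range must contain 0.0 exactly
    scale = (hi - lo) / 255.0 or 1.0          # avoid zero scale for constant input
    zero_point = round(-lo / scale)           # uint8 code that represents 0.0
    q = [max(0, min(255, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from uint8 codes."""
    return [(qi - zero_point) * scale for qi in q]

# Hypothetical weight values, just to show the round trip.
weights = [-0.7, 0.0, 0.31, 1.2]
q, s, z = quantize_8bit(weights)
recovered = dequantize(q, s, z)
# Reconstruction error per value is bounded by half a quantization step (scale / 2).
```

Storing uint8 codes instead of float32 weights is what shrinks the `.tflite` file roughly 4x, and integer kernels are what speed up inference on mobile CPUs.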
Plan @Deep Learning Camp Jeju 2018
- DeepLabv3+ on tf-lite
- Use data redundancy between frames
- Reduce the number of layers, filters and input size
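One simple way to exploit the data redundancy between consecutive frames mentioned in the plan is to run the full network only when the scene has changed enough, and otherwise reuse the previous mask. The sketch below is an assumption about how such a gate could look, not the project's actual implementation; `run_network` stands in for the tf-lite inference call, and the threshold value is illustrative.

```python
def mean_abs_diff(frame_a, frame_b):
    """Average absolute per-pixel difference between two flattened frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def segment_stream(frames, run_network, threshold=0.05):
    """Run the (expensive) segmentation network only when the frame differs
    enough from the last processed one; otherwise reuse the previous mask."""
    prev_frame, prev_mask = None, None
    masks = []
    for frame in frames:
        if prev_frame is None or mean_abs_diff(frame, prev_frame) > threshold:
            prev_mask = run_network(frame)   # expensive path: full inference
            prev_frame = frame
        masks.append(prev_mask)              # cheap path: reuse previous mask
    return masks
```

On mostly static video this skips a large fraction of inference calls, which is where the fps headroom for real-time operation comes from; the threshold trades accuracy on fast motion against speed.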
More results here: bit.ly/jejunet-output
Video Segmentation on Google Pixel 2
Trade-off Between Speed (FPS) and Accuracy (mIoU)
Low-Bit Quantization
| Network | Input | Stride | Quantization (w/a) | PASCAL mIoU | Runtime (.tflite) | File Size (.tflite) |
|---|---|---|---|---|---|---|
Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam. [link]. arXiv:1802.02611, 2018.
Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen. [link]. arXiv:1801.04381, 2018.
This work was partially supported by Deep Learning Camp Jeju and sponsors including Google and SK Telecom. We thank them for their generous support of TPUs and Google Pixel 2 devices, and we thank Hyungsuk and all the mentees for TensorFlow implementations and useful discussions.