# EMDL

Embedded and mobile deep learning research notes

## Docs

### Paper

#### General

  1. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices [arXiv '17, Megvii]

  2. DeepMon: Mobile GPU-based Deep Learning Framework for Continuous Vision Applications [MobiSys '17]

  3. DeepEye: Resource Efficient Local Execution of Multiple Deep Vision Models using Wearable Commodity Hardware [MobiSys '17]

  4. MobiRNN: Efficient Recurrent Neural Network Execution on Mobile GPU [EMDL '17]

  5. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications [arXiv '17, Google]

  6. DeepSense: A GPU-based deep convolutional neural network framework on commodity mobile devices [WearSys '16]

  7. DeepX: A Software Accelerator for Low-Power Deep Learning Inference on Mobile Devices [IPSN '16]

  8. EIE: Efficient Inference Engine on Compressed Deep Neural Network [ISCA '16]

  9. MCDNN: An Approximation-Based Execution Framework for Deep Stream Processing Under Resource Constraints [MobiSys '16]

  10. DXTK: Enabling Resource-efficient Deep Learning on Mobile and Embedded Devices with the DeepX Toolkit [MobiCASE '16]

  11. Sparsification and Separation of Deep Learning Layers for Constrained Resource Inference on Wearables [SenSys '16]

  12. An Early Resource Characterization of Deep Learning on Wearables, Smartphones and Internet-of-Things Devices [IoT-App '15]

  13. CNNdroid: GPU-Accelerated Execution of Trained Deep Convolutional Neural Networks on Android [MM '16]

#### Quantization

  1. The ZipML Framework for Training Models with End-to-End Low Precision: The Cans, the Cannots, and a Little Bit of Deep Learning [ICML '17]
  2. Compressing Deep Convolutional Networks using Vector Quantization [arXiv '14]
  3. Quantized Convolutional Neural Networks for Mobile Devices [CVPR '16]
  4. Fixed-Point Performance Analysis of Recurrent Neural Networks [ICASSP '16]
  5. Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations [arXiv '16]
  6. Loss-aware Binarization of Deep Networks [ICLR '17]
  7. Towards the Limit of Network Quantization [ICLR '17]
  8. Deep Learning with Low Precision by Half-wave Gaussian Quantization [CVPR '17]
  9. ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks [arXiv '17]
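The papers above share one core idea: replace 32-bit float weights with a small set of discrete levels. As a minimal sketch (pure Python for clarity; the function names are illustrative, not from any listed framework), symmetric linear quantization maps each weight to an integer plus one shared scale factor:

```python
# Sketch of symmetric linear (uniform) quantization to signed num_bits integers.
# Real frameworks quantize whole tensors (often per-channel) and may also
# quantize activations; this shows only the per-weight arithmetic.

def quantize(weights, num_bits=8):
    """Map float weights to integers in [-(2^(b-1) - 1), 2^(b-1) - 1]."""
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the integers and the scale."""
    return [v * scale for v in q]

weights = [0.4, -1.2, 0.05, 0.9]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# Each recovered weight differs from the original by at most half a
# quantization step (scale / 2).
```

Storing `q` as int8 plus one float scale cuts weight storage roughly 4x versus float32, which is the starting point the more sophisticated schemes above (binarization, non-uniform codebooks, ShiftCNN's power-of-two levels) build on.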

#### Pruning

  1. Learning both Weights and Connections for Efficient Neural Networks [NIPS '15]
  2. Pruning Filters for Efficient ConvNets [ICLR '17]
  3. Pruning Convolutional Neural Networks for Resource Efficient Inference [ICLR '17]
  4. Soft Weight-Sharing for Neural Network Compression [ICLR '17]
  5. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding [ICLR '16]
  6. Dynamic Network Surgery for Efficient DNNs [NIPS '16]
  7. Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning [CVPR '17]
  8. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression [ICCV '17]
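The simplest baseline underlying several of these papers is magnitude pruning: zero out the smallest-magnitude weights, then (in practice) fine-tune the survivors. A minimal sketch, with an illustrative helper name and a flat weight list standing in for a tensor:

```python
# Sketch of magnitude-based weight pruning: remove (zero) the fraction of
# weights with the smallest absolute values. Ties at the threshold may prune
# slightly more than the target; real pipelines prune per-layer and retrain.

def prune_by_magnitude(weights, sparsity=0.5):
    """Return a copy of `weights` with the smallest |w| set to zero."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.02, -0.9, 0.3, -0.01, 0.5, 0.07]
pruned = prune_by_magnitude(w, sparsity=0.5)
# The three smallest-magnitude weights (0.02, -0.01, 0.07) become zero;
# the rest are kept unchanged.
```

Structured variants (Pruning Filters, ThiNet) remove whole filters or channels instead of individual weights, trading some accuracy headroom for speedups on hardware that cannot exploit unstructured sparsity.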

#### Low-Rank Approximation

  1. Efficient and Accurate Approximations of Nonlinear Convolutional Networks [CVPR '15]
  2. Accelerating Very Deep Convolutional Networks for Classification and Detection (extended version of the above)
  3. Convolutional neural networks with low-rank regularization [arXiv '15]
  4. Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation [NIPS '14]
  5. Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications [ICLR '16]
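The saving these methods exploit: factoring an m-by-n weight matrix as W = A·B with inner rank r turns one m·n matrix-vector product into two smaller ones costing r·(m + n) multiplies. A minimal pure-Python sketch with a rank-1 example (real methods obtain the factors via SVD or by training with a low-rank regularizer; the helper names are illustrative):

```python
# Sketch of the low-rank trick: for W = a b^T (rank 1), computing W @ x
# directly costs m*n multiplies, while a * (b . x) costs only m + n.

def matvec(M, x):
    """Dense matrix-vector product (the 'full' path)."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

a = [1.0, 2.0, 3.0]                         # m = 3
b = [4.0, 5.0]                              # n = 2
W = [[ai * bj for bj in b] for ai in a]     # full 3x2 matrix: 6 parameters

x = [1.0, -1.0]
full = matvec(W, x)                                   # 6 multiplies
bx = sum(bj * xj for bj, xj in zip(b, x))             # n multiplies
low_rank = [ai * bx for ai in a]                      # + m multiplies
# Both paths give the same result; the factored path stores m + n
# parameters instead of m * n and does fewer multiplies.
```

For convolutions the same idea decomposes a k×k×c filter bank into a sequence of cheaper separable or channel-reduced convolutions, which is what the CVPR '15 and NIPS '14 papers above do.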

### Guide

  1. Squeezing Deep Learning Into Mobile Phones

  2. Deep Learning – Tutorial and Recent Trends

  3. Efficient Convolutional Neural Network Inference on Mobile GPUs

  4. Deep Learning Systems, UW course schedule (focused on systems design, not learning theory)

## Code

### General

  1. ARM-software/ComputeLibrary: The ARM Computer Vision and Machine Learning library is a set of functions optimised for both ARM CPUs and GPUs using SIMD technologies (Intro)

  2. Apple CoreML

  3. Tencent/ncnn: ncnn is a high-performance neural network inference framework optimized for the mobile platform

  4. Microsoft Embedded Learning Library

### OpenCL, Vulkan, RenderScript

  1. SaschaWillems/Vulkan: Examples and demos for the new Vulkan API

  2. ARM-software/vulkan-sdk: ARM Vulkan SDK

  3. alexhultman/libvc: Vulkan Compute for C++ (experimentation project)

  4. Deep Learning in a Single File for Smart Devices — mxnet

  5. TensorFlow Android Camera Demo

  6. bwasti/AICamera: Demonstration of using Caffe2 inside an Android application.

  7. mtmd/Mobile_ConvNet: RenderScript based implementation of Convolutional Neural Networks for Android phones

  8. harvardnlp/nmt-android: Neural Machine Translation on Android

  9. hollance/TensorFlow-iOS-Example: Source code for my blog post "Getting started with TensorFlow on iOS"

### Tutorial

  1. ARM® Mali™ GPU OpenCL Developer Guide (PDF)

  2. Optimal Compute on ARM Mali™ GPUs

  3. GPU Compute for Mobile Devices

  4. Compute for Mobile Devices (performance focused)

  5. Hands On OpenCL

  6. Adreno OpenCL Programming Guide

  7. Better OpenCL Performance on Qualcomm Adreno GPU

### Others

  1. mil-tokyo/webdnn: Fastest DNN Execution Framework on Web Browser

## Hardware

### GPU

  1. Bifrost GPU architecture and ARM Mali-G71 GPU

  2. Midgard GPU Architecture, ARM Mali-T880 GPU

  3. Mobile GPU market share

### Driver

  1. [Adreno] csarron/qcom_vendor_binaries: Common Proprietary Qualcomm Binaries
  2. [Mali] Fevax/vendor_samsung_hero2ltexx: Blobs from s7 Edge G935F