Tutorials on Quantized Neural Network using Tensorflow Lite
images
models/mobilenet_v1/graphviz
training
.gitignore
01_Intro_to_TFlite.ipynb
02_Quantization_Basics.ipynb
03_Quantizing_Tensorflow_Graph.ipynb
04_Custom_Gradients.ipynb
05_Training.ipynb
06_Batchnorm_Folding.ipynb
07_Efficient_Integer_Inference.ipynb
LICENSE
README.md
imagenet_classes.py
utils.py

QNN

Quantized Neural Network

Traditionally, deep learning uses the single-precision floating-point (float32) data type. Recent research shows that using lower precision, such as half-precision floating point (float16) or even unsigned 8-bit integers (uint8), does not significantly impact neural network accuracy. Although there are now plenty of tutorials on machine learning frameworks like Tensorflow, I couldn't find many on Tensorflow Lite or quantization. Therefore, I decided to write some tutorials that explain quantization and fast inference.
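To make the uint8 idea concrete, here is a minimal NumPy sketch of affine (scale and zero-point) quantization, the general scheme Tensorflow Lite's uint8 path is based on. This is an illustrative simplification, not TFLite's exact implementation; the function names are my own.

```python
import numpy as np

def quantize_uint8(x):
    """Quantize a float32 array to uint8 with an affine mapping (illustrative)."""
    x_min = min(float(x.min()), 0.0)  # make sure 0.0 is exactly representable
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / 255.0          # float step between adjacent uint8 codes
    zero_point = int(round(-x_min / scale))  # uint8 code that represents 0.0
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 values from the uint8 codes."""
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.RandomState(0).uniform(-1.0, 1.0, 1000).astype(np.float32)
q, scale, zp = quantize_uint8(x)
x_hat = dequantize(q, scale, zp)
# Round-trip error stays within about half a quantization step.
print(np.max(np.abs(x - x_hat)) <= scale / 2 + 1e-6)  # prints True
```

The key observation is that each value is stored in 1 byte instead of 4, yet the reconstruction error is bounded by half the step size `scale / 2`, which for well-behaved weight distributions is small enough that accuracy barely moves.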

You don't need any prior knowledge of quantization, but I do expect you to be familiar with Tensorflow and the basics of deep neural networks. In these tutorials I will use Tensorflow Lite in Tensorflow 1.10 and Python 3.
