ARMnn_Quantize_RPi_32-bits

ARMnn TensorFlow Lite classification for the Raspberry Pi 4

A C++ implementation of ARMnn (ARM Neural Network framework) classification with a TensorFlow Lite model on a Raspberry Pi 4. With the Raspberry Pi overclocked to 2000 MHz, the app runs at a disappointing 4.6 FPS without any hardware accelerator.

Paper: https://arxiv.org/pdf/1712.05877.pdf
Training set: ImageNet (1000 classes)
Input size: 224x224
Frame rate MobileNet_V1 Lite: 4.6 FPS (RPi 4 @ 2000 MHz - 32-bit OS)

Specially made for a bare Raspberry Pi, see: https://qengineering.eu/install-armnn-on-raspberry-pi-4.html

To extract and run the network in Code::Blocks
$ mkdir MyDir
$ cd MyDir
$ wget https://github.com/Qengineering/ARMnn_Quantize_RPi_32-bits/archive/master.zip
$ unzip -j master.zip
Remove master.zip and README.md as they are no longer needed.
$ rm master.zip
$ rm README.md

Your MyDir folder must now look like this:
schoolbus.jpg
grace_hopper.bmp
labels.txt
TestARMnnMobileNetV1_Quant.cpb
mobilenetv1_quant_tflite.cpp
model_output_labels_loader.hpp

Next, choose your model from TensorFlow: https://www.tensorflow.org/lite/guide/hosted_models
Download a quantized model, extract the .tflite from the tarball and place it in your MyDir.

Now your MyDir folder may contain: mobilenet_v1_1.0_224_quant.tflite.
Or: inception_v4_299_quant.tflite. Or both of course.
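
For example, the MobileNet V1 quantized model could be fetched and unpacked like this. The URL below is the one listed on the hosted models page at the time of writing and may change, so check the page for the current link:
$ wget https://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz
$ tar -xzf mobilenet_v1_1.0_224_quant.tgz
Keep only the mobilenet_v1_1.0_224_quant.tflite file; the tarball's other contents (frozen graph files and checkpoints) can be removed.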

Run TestARMnnMobileNetV1_Quant.cpb with Code::Blocks.
Pass the .tflite file of your choice and the image to be tested as command line parameters.
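
If you run the compiled program from a terminal instead of the IDE, the call could look like the line below. The executable name is only an assumption here; it depends on your Code::Blocks build settings:
$ ./TestARMnnMobileNetV1_Quant mobilenet_v1_1.0_224_quant.tflite schoolbus.jpg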


Remember, you also need a working OpenCV 4 installation on your Raspberry Pi.
Preferably use our installation: https://qengineering.eu/install-opencv-4.3-on-raspberry-pi-4.html
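
A quick way to check your OpenCV version (assuming the pkg-config file was generated during the OpenCV build) is:
$ pkg-config --modversion opencv4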
