Tensorflow model export from Python to C++ and inference without using TF library
Tensorflow-Model-Inference


With this project you can train your models using the Tensorflow library in Python and export the trained model to your C++ project, where it can be used without including the official TF library for C++.

You export your trained weights and biases to .npz files (archives of tensors) and then load them for inference.

Model export

  • Requirements: the tensorflow and numpy packages
  • Tensorflow model training: make sure you update tf.GraphKeys.TRAINABLE_VARIABLES during training
  • Tensorflow session saver: check how you save your session, because you will need to restore it in order to export your model to .npz files. The exporting script currently works with .meta files, but any method works as long as you know how to restore your session together with its graph
  • Run the script from the model-export folder and pass the path to your model as a parameter
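The export step can be sketched as follows, assuming the trainable variables have already been fetched from a restored session as a name-to-array dict (the function and variable names here are illustrative, not the repository's actual script):

```python
import numpy as np

def export_weights(variables, out_path):
    """Save a {name: ndarray} dict of fetched trainable variables
    into a single compressed .npz archive."""
    # Tensorflow variable names contain '/' and ':', which are awkward
    # as archive member names, so sanitize them first.
    clean = {name.replace('/', '_').replace(':', '_'): value
             for name, value in variables.items()}
    np.savez_compressed(out_path, **clean)

# Illustrative usage with dummy tensors standing in for values
# obtained via sess.run() on the restored graph's variables:
weights = {
    'conv1/kernel:0': np.zeros((3, 3, 3, 64), dtype=np.float32),
    'conv1/bias:0':   np.zeros((64,), dtype=np.float32),
}
export_weights(weights, 'model.npz')

# The archive round-trips in numpy (or can be read from C++ via cnpy):
restored = np.load('model.npz')
print(sorted(restored.files))
```

Because .npz is just a zip of .npy members, the same archive can be opened on the C++ side by the cnpy submodule.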

Inference in C++

Create model

For C++ inference you have to manually initialize your layers with properties matching your Tensorflow model.
For example, for a convolution layer you need to specify:

  1. Input shape
  2. Filter shape
  3. Number of filters
  4. Padding type
  5. Activation function
  6. Layer name - must be the same as in Tensorflow
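The per-layer properties above can be captured in a small spec. This Python sketch (illustrative only, not the project's actual C++ API) shows the kind of record you would mirror on the C++ side, with the name field keyed to the Tensorflow layer name so the exported weights can be matched up:

```python
from dataclasses import dataclass

@dataclass
class ConvLayerSpec:
    # The six properties a convolution layer needs at init time
    input_shape: tuple    # (height, width, channels)
    filter_shape: tuple   # (filter height, filter width)
    filters_count: int    # number of filters
    padding: str          # padding type, e.g. 'SAME' or 'VALID'
    activation: str       # activation function, e.g. 'relu'
    name: str             # must match the Tensorflow layer name

# First VGG-16 convolution block on 64x64 Tiny-ImageNet inputs
# (the name 'conv1_1' is an assumed example, not taken from the repo):
conv1 = ConvLayerSpec(
    input_shape=(64, 64, 3),
    filter_shape=(3, 3),
    filters_count=64,
    padding='SAME',
    activation='relu',
    name='conv1_1',
)
print(conv1.name, conv1.filters_count)
```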

How fast does it work?

The time required to prepare the VGG-16 model and load its weights and biases is around 1.3 seconds:
18:22:53.062 info T#8648 create_layers - Loading layers...
18:22:54.382 info T#8648 read_image - Reading image

The time required to feed forward a test image from Tiny-ImageNet is around 1.1 seconds:
18:55:43.429 info T#7995 main - Running inference...
18:55:44.571 info T#7995 main - Output ready

Example - VGG-16

As an example of usage I have chosen the VGG-16 CNN model, trained on Tiny-ImageNet.
More details about the VGG model and Tiny-ImageNet can be found in this article: VGGNet and Tiny ImageNet

The model consists of 13 layers (10 convolutional and 3 dense) and takes 956 MB of storage.
Saved as .npz files it takes 476 MB, which is relatively compact.

Submodules:

  • cnpy - a C++ library for reading and writing .npy/.npz files
  • yannpp - a plain C++ neural network implementation used for inference
