With this project you can train your models with the TensorFlow library in Python and then use the trained model in your C++ project without linking against the official TensorFlow C++ library. You export your trained weights and biases to .npz files (archived NumPy tensors) and then load them for inference.
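The export format can be sketched with plain NumPy. The tensor names below (conv1_W, conv1_b) and their shapes are hypothetical; in the real export they come from your TensorFlow graph:

```python
import numpy as np

# Hypothetical tensors standing in for trained parameters;
# in the real export they are fetched from the TensorFlow session.
weights = {
    "conv1_W": np.random.randn(3, 3, 3, 64).astype(np.float32),  # conv filter
    "conv1_b": np.zeros(64, dtype=np.float32),                   # bias
}

# Save all tensors into a single .npz archive, keyed by name.
np.savez("model.npz", **weights)

# The C++ side later looks tensors up by these same names.
restored = np.load("model.npz")
print(sorted(restored.files))     # ['conv1_W', 'conv1_b']
print(restored["conv1_W"].shape)  # (3, 3, 3, 64)
```

Because each array is stored under its name, the loader on the C++ side can match tensors to layers without any extra metadata.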
- Requirements: tensorflow and numpy packages
- Tensorflow model training: make sure tf.GraphKeys.TRAINABLE_VARIABLES is kept up to date during training
- Tensorflow session saver: check how you save your session, because you will need to restore it in order to export your model to .npz files. Currently the export script works with .meta files, but any saving format works as long as you know how to restore your session together with its graph
- Run the export script from the model-export folder and pass the path to your model as a parameter
Inference in C++
For C++ inference you have to manually initialize your layers with the same properties as in your TensorFlow model.
For example, for a convolution layer you need to specify:
- Input shape
- Filter shape
- Filters count
- Padding type
- Activation function
- Layer name - must be the same as in Tensorflow
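Most of these properties can be read back from the exported archive instead of being guessed. A minimal sketch in Python; the name conv1_W and the TensorFlow filter layout (height, width, in_channels, out_channels) are assumptions standing in for your actual model:

```python
import numpy as np

# Stand-in archive; in practice this is the exported model file.
np.savez("vgg.npz", conv1_W=np.zeros((3, 3, 3, 64), dtype=np.float32))

archive = np.load("vgg.npz")
filt = archive["conv1_W"]  # look the layer up by its TensorFlow name

# TensorFlow stores conv filters as (height, width, in_channels, out_channels),
# so filter shape and filter count fall out of the array's shape.
filter_shape = filt.shape[:2]  # (3, 3)
in_channels = filt.shape[2]    # 3
filters_count = filt.shape[3]  # 64
print(filter_shape, in_channels, filters_count)
```

Padding type and activation function are not stored in the weights, so those still have to match your TensorFlow model by hand.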
How fast does it work?
Time required to prepare the VGG-16 model and load its weights and biases is around 1.3 seconds:
18:22:53.062 info T#8648 create_layers - Loading layers...
18:22:54.382 info T#8648 read_image - Reading image
Time required to feed a test image from Tiny ImageNet forward through the network is around 1.1 seconds:
18:55:43.429 info T#7995 main - Running inference...
18:55:44.571 info T#7995 main - Output ready
Example - VGG-16
As a usage example I chose the VGG-16 CNN model trained on Tiny ImageNet.
More details about the VGG model and Tiny ImageNet can be found in the article VGGNet and Tiny ImageNet.
The model consists of 13 layers (10 convolutional and 3 dense) and takes 956 MB of storage; exported to .npz files it takes 476 MB, which is comparatively small.
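The storage numbers are easy to sanity-check: a float32 parameter takes 4 bytes, so an uncompressed archive weighs roughly the parameter count times four. A sketch for a single hypothetical dense layer (25088 → 4096 is the classic VGG-16 fc6 shape for 224×224 input; the Tiny ImageNet variant may differ):

```python
import numpy as np

# Hypothetical dense layer: 25088 inputs -> 4096 units.
W = np.zeros((25088, 4096), dtype=np.float32)

params = W.size                        # 102,760,448 parameters
size_mb = params * W.itemsize / 2**20  # float32 = 4 bytes per parameter
print(round(size_mb))                  # 392 (MB for this one layer)
```

Dense layers like this one dominate the archive size, which is why a 13-layer model can still take hundreds of megabytes.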