
TensorFlow CMake/C++ Collection

Looking at the official docs: What do you see? The usual fare? Now, guess what: This is a bazel-free zone. We use CMake here!

This collection contains reliable and dead-simple examples to use TensorFlow in C, C++, Go and Python: load a pre-trained model or compile a custom operation, with or without CUDA. All builds are tested against the two most recent stable TensorFlow versions and rely on CMake with a custom FindTensorFlow.cmake module, which includes common workarounds for bugs in specific TF versions.
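A consuming CMakeLists.txt might look roughly like the sketch below. Note that the module path and the `TensorFlow_*` variable names are assumptions for illustration; consult cmake/modules/FindTensorFlow.cmake for the names it actually exports.

```cmake
# Hypothetical CMakeLists.txt sketch; variable names are assumptions.
cmake_minimum_required(VERSION 3.5)
project(tf_example)

# make the bundled FindTensorFlow.cmake visible to find_package
list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake/modules")
find_package(TensorFlow REQUIRED)

add_executable(example example.cc)
target_include_directories(example PRIVATE ${TensorFlow_INCLUDE_DIR})
target_link_libraries(example ${TensorFlow_LIBRARY})
```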

The implementation is tested against the following versions

TensorFlow v1.9.0, v1.10.0, v1.11.0, v1.12.0 (each with its own CI build status)

It contains the following examples.

Example Explanation
custom operation build a custom operation for TensorFlow in C++/CUDA (requires only pip)
inference (C++) run inference in C++
inference (C) run inference in C
inference (Go) run inference in Go
event writer write event files for TensorBoard in C++
keras cpp-inference example run a Keras model in C++
simple example create and run a TensorFlow graph in C++
resize image example resize an image in TensorFlow with/without OpenCV

Custom Operation

This example illustrates the process of creating a custom operation using C++/CUDA and CMake. It is not intended as a peak-performance implementation; rather, it is just a boilerplate template.

user@host $ pip install tensorflow-gpu --user # only the pip package is needed
user@host $ cd custom_op/user_ops
user@host $ cmake .
user@host $ make
user@host $ python
user@host $ cd ..
user@host $ python

TensorFlow Graph within C++

This example illustrates the process of loading an image (using OpenCV or TensorFlow), resizing it, and saving it as a JPG or PNG (using OpenCV or TensorFlow).

user@host $ cd examples/resize
user@host $ export TENSORFLOW_BUILD_DIR=...
user@host $ export TENSORFLOW_SOURCE_DIR=...
user@host $ cmake .
user@host $ make
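The TensorFlow and OpenCV calls in this example both boil down to an image resize. As a language-agnostic illustration of the idea (not the example's actual code), nearest-neighbour resizing can be sketched in pure Python:

```python
# Nearest-neighbour resize sketch (illustration only; the actual example
# uses TensorFlow ops or OpenCV). The image is a nested list of H rows.
def resize_nearest(img, new_h, new_w):
    old_h, old_w = len(img), len(img[0])
    # map each target pixel back to its nearest source pixel
    return [
        [img[r * old_h // new_h][c * old_w // new_w] for c in range(new_w)]
        for r in range(new_h)
    ]

img = [[1, 2],
       [3, 4]]
print(resize_nearest(img, 4, 4))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```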


There are two examples demonstrating the handling of TensorFlow Serving: one using a vector input and one using an encoded image input.

server@host $ CHOOSE=basic # or image
server@host $ cd serving/${CHOOSE}/training
server@host $ python # create some model
server@host $ cd serving/server/
server@host $ ./ # start server

# send some queries

client@host $ cd client/bash
client@host $ ./
client@host $ cd client/python
# for the basic-example
client@host $ python
client@host $ python
# for the image-example
client@host $ python /path/to/img.[png,jpg]
client@host $ python /path/to/img.[png,jpg]
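For reference, TensorFlow Serving's REST predict API expects a JSON body of the form `{"instances": [...]}`. A minimal payload for the basic (vector) example might be built like this; the model name "basic", the port, and the 2-element input vector are assumptions for illustration:

```python
import json

# Hypothetical payload for TensorFlow Serving's REST predict API.
def build_predict_request(vectors):
    # "instances" is the row-oriented input format of the predict endpoint
    return json.dumps({"instances": vectors})

body = build_predict_request([[1.0, 1.0]])
print(body)  # {"instances": [[1.0, 1.0]]}
# POST this body to e.g. http://localhost:8501/v1/models/basic:predict
```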


Create a model in Python, save the graph to disk and load it in C/C++/Go/Python to perform inference. As these examples are based on the TensorFlow C-API, they require the TensorFlow C library, which is not shipped in the pip package (tensorflow-gpu). Hence, you will need to build TensorFlow from source beforehand, e.g.,

user@host $ ls ${TENSORFLOW_SOURCE_DIR}

ACKNOWLEDGMENTS    ANDROID_NDK_HOME    bazel-genfiles    bazel-out    bazel-tensorflow    configure    ...
user@host $ cd ${TENSORFLOW_SOURCE_DIR}
user@host $  ./configure
user@host $  # ... or whatever options you used here
user@host $ bazel build -c opt --copt=-mfpmath=both --copt=-msse4.2 --config=cuda //
user@host $ bazel build -c opt --copt=-mfpmath=both --copt=-msse4.2 --config=cuda //

user@host $ export TENSORFLOW_BUILD_DIR=/tensorflow_dist
user@host $ mkdir ${TENSORFLOW_BUILD_DIR}
user@host $ cp ${TENSORFLOW_SOURCE_DIR}/bazel-bin/tensorflow/*.so ${TENSORFLOW_BUILD_DIR}/
user@host $ cp ${TENSORFLOW_SOURCE_DIR}/bazel-genfiles/tensorflow/cc/ops/*.h ${TENSORFLOW_BUILD_DIR}/includes/tensorflow/cc/ops/

1. Save Model

We just run a very basic model:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1, 2], name='input')
output = tf.identity(tf.layers.dense(x, 1), name='output')

Then save the model as you regularly would. The training script does this and prints some outputs:

user@host $ python

[<tf.Variable 'dense/kernel:0' shape=(2, 1) dtype=float32_ref>, <tf.Variable 'dense/bias:0' shape=(1,) dtype=float32_ref>]
input            [[1. 1.]]
output           [[2.1909506]]
dense/kernel:0   [[0.9070684]
                  [1.2838823]]
dense/bias:0     [0.]
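Since the dense layer computes y = xW + b, the printed output can be checked by hand against the kernel values [0.9070684, 1.2838823] and bias 0 printed by the inference examples:

```python
# Sanity check: a dense layer with no activation is just y = x @ W + b.
# Weight and bias values are taken from the printed model outputs.
W = [0.9070684, 1.2838823]
b = 0.0
x = [1.0, 1.0]
y = sum(xi * wi for xi, wi in zip(x, W)) + b
print(y)  # ≈ 2.1909507, matching the printed output value
```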

2. Run Inference


Python

user@host $ python python/

[<tf.Variable 'dense/kernel:0' shape=(2, 1) dtype=float32_ref>, <tf.Variable 'dense/bias:0' shape=(1,) dtype=float32_ref>]
input            [[1. 1.]]
output           [[2.1909506]]
dense/kernel:0   [[0.9070684]
                  [1.2838823]]
dense/bias:0     [0.]


C++

user@host $ cd cc
user@host $ cmake .
user@host $ make
user@host $ cd ..
user@host $ ./cc/inference_cc

input           Tensor<type: float shape: [1,2] values: [1 1]>
output          Tensor<type: float shape: [1,1] values: [2.19095063]>
dense/kernel:0  Tensor<type: float shape: [2,1] values: [0.907068372][1.28388226]>
dense/bias:0    Tensor<type: float shape: [1] values: 0>


C

user@host $ cd c
user@host $ cmake .
user@host $ make
user@host $ cd ..
user@host $ ./c/inference_c



Go

user@host $ go get
user@host $ cd go
user@host $ ./
user@host $ cd ../
user@host $ ./inference_go

input           [[1 1]]
output          [[2.1909506]]
dense/kernel:0  [[0.9070684] [1.2838823]]
dense/bias:0    [0]