
Seldon base image with python and OpenVINO inference engine


The Seldon prediction base component with the OpenVINO toolkit makes it easy to implement inference operations with a performance boost.

The OpenVINO inference engine, together with the model optimizer, makes faster execution possible.

Use the model optimizer to convert trained models from frameworks such as TensorFlow, MXNet, Caffe, Kaldi or ONNX to the Intermediate Representation format.

Models in this format can be executed more efficiently by the inference engine, which takes advantage of available CPU features to reduce inference latency and gain extra throughput.

The current version of OpenVINO also supports low-precision models, which improve performance even further with minimal impact on accuracy.
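As a sketch of the conversion step described above (the model path, output directory and precision are illustrative placeholders, and the exact converter script name depends on the installed OpenVINO version):

```shell
# Convert a frozen TensorFlow graph to Intermediate Representation (IR).
# All paths below are placeholders; adjust them to your model and
# your OpenVINO installation.
python mo_tf.py \
    --input_model /models/frozen_inference_graph.pb \
    --output_dir /models/ir \
    --data_type FP32
```

The resulting `.xml` and `.bin` IR files are what the inference engine loads at serving time.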


Build the base image with:

make build


This base image can be used to build Seldon components in exactly the same way as the standard Seldon base images. Use the s2i tool as documented here. An example is presented below:

s2i build . seldonio/seldon-core-s2i-openvino:0.2 {target_component_image_name}
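As a hedged sketch of a complete build: the Seldon s2i flow expects the source directory to contain the Python model class and an .s2i/environment file naming it. The keys below follow the standard Seldon s2i conventions; the class name and target image name are illustrative:

```shell
# Contents of .s2i/environment in the source directory (standard Seldon keys):
#   MODEL_NAME=Predictor      # Python file/class implementing predict()
#   API_TYPE=REST
#   SERVICE_TYPE=MODEL
#   PERSISTENCE=0
s2i build . seldonio/seldon-core-s2i-openvino:0.2 my-openvino-model:0.1
```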


Models ensemble with OpenVINO


OpenVINO toolkit

OpenVINO API docs

Seldon pipeline example


Besides the OpenVINO inference engine Python API, this Seldon base image contains several other useful components:

  • Intel optimized python version
  • Intel optimized OpenCV package
  • Intel optimized TensorFlow with MKL engine
  • Configured conda package manager
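A quick way to confirm the bundled packages is to query their versions inside the container. The image tag below matches the build example in this README; the import names are the usual ones for these packages and may differ per release:

```shell
# Print the versions of the bundled OpenCV and TensorFlow packages.
docker run --rm seldonio/seldon-core-s2i-openvino:0.2 \
    python -c "import cv2, tensorflow as tf; print(cv2.__version__, tf.__version__)"
```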

OpenVINO and TensorFlow in this Docker image employ the MKL-DNN library with OpenMP threading control. Make sure you configure optimal values for the MKL-related environment variables in the containers. Recommendations are listed below:



OMP_NUM_THREADS={number of physical CPU cores to allocate}
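For example, a container entrypoint could derive the value from the cores visible to it. This is a sketch: `nproc` reports logical cores, so halve the result on hyper-threaded hosts to get the physical core count the recommendation refers to:

```shell
# Set OMP_NUM_THREADS to the number of cores visible to the container.
OMP_NUM_THREADS=$(nproc)
export OMP_NUM_THREADS
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```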
