seldon-core-onnx

This repository shows how to serve an ONNX model with seldon-core. We deploy a deep convolutional neural network for emotion recognition in faces on a local Kubernetes cluster. The ONNX model can be found in the onnx/models repository.

Blog Article

  • English:
  • German:

Application overview

Testing of the application: we deploy a model on a Kubernetes cluster to perform emotion recognition on a face.

Repository Overview

Installation

We need the following requirements:

Run seldon-core with Docker

Clone the repository and cd into the folder.

docker build -t emotion_service:0.1 . && docker run -p 5000:5000 -it emotion_service:0.1 

Run the following script with Python:

from PIL import Image
import numpy as np
import requests

# Load the test image in grayscale and resize it to the 64x64 model input
path_to_image = "images/smile.jpg"
image = Image.open(path_to_image).convert('L')
resized = image.resize((64, 64))

# Reshape to NCHW (batch, channel, height, width)
values = np.array(resized).reshape(1, 1, 64, 64)

# Send the image to the local seldon-core REST endpoint
req = requests.post("http://localhost:5000/predict", json={"data": {"ndarray": values.tolist()}})
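
The service answers with a Seldon prediction message. A minimal sketch for inspecting the result, assuming the response mirrors the request layout ({"data": {"ndarray": ...}}) with one row of class scores:

import numpy as np

# Parse the scores returned by seldon-core (layout assumed: {"data": {"ndarray": [[...]]}})
scores = np.array(req.json()["data"]["ndarray"])
print("Raw scores:", scores)
print("Predicted class index:", int(scores.argmax()))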

Tutorial with Kubernetes

All commands to set up the model on the Kubernetes cluster can be found in the Seldon_Kubernetes.ipynb notebook.

Inference with nGraph

Take a look at the nGraph compiler repository.

from ngraph_onnx.onnx_importer.importer import import_onnx_file
import ngraph as ng
# Import the ONNX file
model = import_onnx_file('model/model.onnx')

# Create an nGraph runtime environment
runtime = ng.runtime(backend_name='CPU')

# Compile the model into a callable computation
emotion_cnn = runtime.computation(model)
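
The compiled computation can then be called directly on a preprocessed input array. A minimal sketch, reusing the preprocessing from the client script above and assuming the model takes a single 1x1x64x64 float input:

import numpy as np
from PIL import Image

# Preprocess an example image the same way as in the client script
image = Image.open("images/smile.jpg").convert('L').resize((64, 64))
values = np.array(image, dtype=np.float32).reshape(1, 1, 64, 64)

# Run the compiled nGraph computation and print the raw emotion scores
scores = emotion_cnn(values)
print(scores)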

Testing

# Get the Minikube IP and the node port of the Ambassador gateway
minikube ip
kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'
# Send the test payload to the Seldon prediction endpoint (replace IP and port with your own values)
curl -vX POST http://192.168.99.100:30809/seldon/default/seldon-emotion/api/v0.1/predictions -d @payload.json --header "Content-Type: application/json"
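
payload.json is the example request body shipped with the repository. If you want to build one from your own image, here is a sketch that mirrors the preprocessing of the local client, assuming the file uses the same {"data": {"ndarray": ...}} structure:

import json
import numpy as np
from PIL import Image

# Preprocess the image exactly as in the local client script
image = Image.open("images/smile.jpg").convert('L').resize((64, 64))
values = np.array(image).reshape(1, 1, 64, 64)

# Write a Seldon-style request body to payload.json
with open("payload.json", "w") as f:
    json.dump({"data": {"ndarray": values.tolist()}}, f)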