mlflow-kubernetes

mlflow-kubernetes automatically publishes MLflow models to on-premise Kubernetes clusters whenever new models are registered or existing models change in MLflow, so that others can consume these models as a service. The project contains two related but independent sub-modules, server and client: one listens for model events and creates/updates/deletes pods in Kubernetes, the other is used to access the deployed models and obtain predictions.

installation

You can install it from PyPI. By default only the client dependencies are installed; if you want to run a server, add [server] right after the package name:
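A sketch of both install variants, assuming the PyPI package name matches the egg name used below (mlflow-kubernetes); the extras are quoted so shells like zsh do not expand the brackets:

```shell
# client-only install
pip install mlflow-kubernetes

# server install, pulling in the [server] extra dependencies
pip install "mlflow-kubernetes[server]"
```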

Caution!

You need access privileges to the forked mlflow repository to start a listening server.

or install from source code:

pip install git+ssh://git@codeup.teambition.com/fusiontree/fusionplatform/mllfow-kubernetes#egg=mlflow-kubernetes

usage

This package contains a server module to deploy models and a client module to retrieve inference results from models that have already been published.

configuration

Configuration can be provided through environment variables or through the config module.

kubernetes

KUBERNETES_CONFIG_PATH :

Path to the config file providing the access and authorization information needed to communicate with Kubernetes. Both the server and the client need to access Kubernetes through its web API; by default the system's own kubeconfig is used.

docker

DOCKER_REGISTRY_URI:

The Docker Hub or private registry to fetch images from and push images to. Models downloaded from MLflow are pushed to this registry, and Kubernetes uses it as the image repository when building pods.

message bus

MODELS_EVENT_URI:

The event message bus the server listens on. Currently Redis pub/sub is the only supported target URI, e.g. redis://localhost:6379.
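The three settings above can be supplied as environment variables before the server or client starts. A minimal sketch (the values are illustrative placeholders, not real endpoints):

```python
import os

# Point at the kubeconfig used to reach the cluster's web API
os.environ.setdefault("KUBERNETES_CONFIG_PATH", os.path.expanduser("~/.kube/config"))

# Registry that model images are pushed to and pulled from
os.environ.setdefault("DOCKER_REGISTRY_URI", "registry.example.com:5000")

# Redis pub/sub URI the server subscribes to for model events
os.environ.setdefault("MODELS_EVENT_URI", "redis://localhost:6379")
```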

server

Make sure you have installed the required server extras:

pip install git+ssh://git@codeup.teambition.com/fusiontree/...#egg=mlflow-kubernetes[server]

Then start a server. By default it subscribes to a Redis pub/sub channel and assumes the local configuration can access the Kubernetes cluster; both can be overridden via environment variables or command-line parameters.

For example:

mlflowkube models server --model-events-target redis://host:port --docker-registry-target \
  --kubernetes-config-path ~/path/to/kubernetes/config

client

Accessing models in mlflow-kubernetes is easy: just build a new ModelService:

from mlflow_kubernetes import ModelService
from mlflow_kubernetes import config
import pandas as pd
from sklearn import datasets
# set KUBE_AUTH_TOKEN in environment variable also works
config.KUBE_AUTH_TOKEN = '***kubernetes webapi access token***'
model_service = ModelService(model_name='iris-rf')
iris = datasets.load_iris()
iris_train = pd.DataFrame(iris.data, columns=iris.feature_names)
result = model_service.predict(iris_train)
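Since the README title says models are referenced through HTTP interfaces, a deployed model can presumably also be called directly over HTTP. A sketch of building such a request, assuming the pod exposes MLflow's standard pyfunc scoring endpoint (POST /invocations with a "dataframe_split" JSON body — an assumption based on stock MLflow scoring servers, not confirmed by this README):

```python
import json

import pandas as pd

# One iris sample in the same column layout as the ModelService example above
df = pd.DataFrame(
    [[5.1, 3.5, 1.4, 0.2]],
    columns=[
        "sepal length (cm)",
        "sepal width (cm)",
        "petal length (cm)",
        "petal width (cm)",
    ],
)

# MLflow's "dataframe_split" payload: {"columns": [...], "index": [...], "data": [...]}
payload = json.dumps({"dataframe_split": df.to_dict(orient="split")})

# The payload could then be sent with e.g.
#   requests.post(f"http://{model_host}/invocations", data=payload,
#                 headers={"Content-Type": "application/json"})
# where model_host is the hypothetical service address of the deployed pod.
```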
