
Serverless Machine Learning UI

This is a web application with built-in triggers to machine learning models powered by OpenFaaS. The aim of this project is to demonstrate how machine learning computations can be executed using an on-premise serverless approach, instead of the traditional server model. Because the solution must be on-premise, cloud-based approaches are not considered; rather, this exploration seeks to use an in-house server infrastructure but with the same functionality as a cloud-native model. Namely, it must be able to:

  1. Autoscale workloads up and down from zero

  2. Allocate resources efficiently between different processes

For these reasons, the application is deployed on top of Kubernetes, which orchestrates these services automatically so developers can avoid dealing with underlying infrastructure concerns such as routing, monitoring, and scaling.

Note that this UI does not come with the ML models pre-installed; they are deployed separately with OpenFaaS to demonstrate a serverless architecture (see the Deploying Machine Learning section for more details).

[Diagram: project overview]

Setup

Prerequisites

  - A running Kubernetes cluster (e.g. Minikube)
  - kubectl configured to talk to the cluster
  - Docker (only needed if you build the image locally)
  - OpenFaaS installed on the cluster (for the machine learning functions)

Deploying the UI

  1. Clone this repository and cd into the folder.

  2. Build the Docker image from the Dockerfile.

  $ docker build -t faasml:test .

  3. Create a Kubernetes deployment using the newly built Docker image.

  $ kubectl create deployment faasml --image=faasml:test

  Alternatively, you can skip building the image locally and use the pre-built image from Docker Hub.

  $ kubectl create deployment faasml --image="docker.io/ryanrashid/faasml:v3"

  4. Create a Kubernetes service from the deployment.

  $ kubectl expose deployment/faasml --type=LoadBalancer --port=5000

  5. Access the URL for the service (a quick verification sketch follows the commands below).

Using Minikube

  $ minikube service faasml --url

Using kubectl

  $ kubectl describe service faasml
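
As a quick sanity check before opening the URL, you can confirm that the deployment and service were created with standard kubectl commands:

  $ kubectl get deployment faasml
  $ kubectl get service faasml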

Now the basic UI is set up and should look as follows.

[Screenshot: the basic UI]

Note: Since the UI only holds the triggers and not the models themselves, the next step is to set up the ML models on top of OpenFaaS and Kubernetes so that the triggers work properly.

Deploying Machine Learning

Since the aim of this project is to explore serverless architectures through Functions as a Service (FaaS), the machine learning computation is not hosted in the application itself; rather, it is 'outsourced' to the Kubernetes cluster. The following diagram illustrates the process:

[Diagram: serverless architecture]
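
Under the hood, each trigger amounts to an HTTP request to the OpenFaaS gateway's /function/<name> route. A minimal sketch, assuming the gateway is reachable at <gateway-ip>:8080 and using inception as an example function name (the exact name and payload depend on the deployed function):

  $ curl http://<gateway-ip>:8080/function/inception --data '<input payload, e.g. an image URL>'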

Once again, this setup assumes that OpenFaaS has been properly installed and configured on a Kubernetes cluster. If you need help setting up OpenFaaS, see the More Resources section for a comprehensive tutorial.
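
If the gateway is not exposed outside the cluster, one common way to reach the OpenFaaS UI is to port-forward the gateway service (this assumes OpenFaaS was installed into the default openfaas namespace):

  $ kubectl port-forward -n openfaas svc/gateway 8080:8080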

Repeat the following process for the four machine learning models ('Inception', 'Face blur by Endre Simo', 'Line Drawing Generator from a photograph', and 'Colorization'):

  1. Access the OpenFaaS UI on port 8080, and click 'Deploy New Function'.

  [Screenshot: 'Deploy New Function' button in the OpenFaaS UI]

  2. Find the model you want to deploy, and click 'Deploy' in the bottom right.

  [Screenshot: the OpenFaaS function store]

  3. You should now see the function displayed in a list on the left.

  [Screenshot: deployed functions listed on the left]

Once all the models have been deployed, the UI triggers should be able to invoke these functions.
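
Alternatively, if you prefer the command line to the OpenFaaS UI, the store functions can be deployed with faas-cli; the exact store names vary, so list them first (inception below is only an example):

  $ faas-cli store list
  $ faas-cli store deploy inception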

Inception

[Screenshot: Inception demo]

Pigo

[Screenshot: Pigo face blur demo]

Colorize

[Screenshot: Colorize demo]
More Resources
