JasonSCFu/Deploy-ML-Application-using-OpenShift
Deploy-ML-Application-using-OpenShift

A step-by-step, hands-on tutorial covering the end-to-end ML model deployment process.

You can use the OpenShift Sandbox to run this tutorial.

Step 1: Create a simple NLP model using 01-Create-Claims-Classification.ipynb
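The notebook's exact pipeline isn't reproduced here; as a rough sketch, a free-text repair classifier of this kind is often a TF-IDF vectorizer feeding a linear model. The toy corpus and labels below are invented stand-ins, not the notebook's data:

```python
# Minimal sketch of a free-text repair classifier, assuming a TF-IDF +
# logistic-regression pipeline. The tiny corpus below is invented; the
# real notebook trains on its own claims data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I turn the key and nothing happens",
    "engine will not start in the morning",
    "windshield is cracked on the left side",
    "broken window glass after hail storm",
]
labels = ["ignition", "ignition", "glass", "glass"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["car key does nothing when turned"])[0])
```

A linear model over TF-IDF features is a common baseline for short-text classification; the notebook may well use a different model, but the overall shape (vectorize, fit, predict) is the same.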

Step 2: Exposing the model as an API

In the previous step, we created code that classifies a repair based on the free text we enter. But we can't use a notebook like this directly in a production environment, so we will now package the code as an API that other applications can query directly.

Some explanations first:

  • The code that we wrote in the notebook has been repackaged as a single Python file, prediction.py. Basically, the file combines the code in all the cells of the notebook.

  • To use this code as a function you can call, we added a function called predict that takes a string as an input, classifies the repair, and sends back the resulting classification. Open the file directly in JupyterLab, and you should recognize our previous code along with this new additional function.

  • Other files in the folder provide the functions to launch a web server, which we will use to serve our API.
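As a sketch of what such a predict function might look like — the keyword lookup below is only a stand-in for the trained model so the example is self-contained, and the exact signature is an assumption:

```python
# Hypothetical shape of prediction.py's predict() function. A keyword
# lookup stands in for the notebook's trained classifier; the real file
# runs the model built in the notebook.

def _classify(text: str) -> str:
    """Stand-in for the trained classifier."""
    if "key" in text.lower() or "start" in text.lower():
        return "ignition"
    return "other"

def predict(text: str) -> dict:
    """Classify a free-text repair description and return the result."""
    return {"prediction": _classify(text)}

print(predict("I turn the key and nothing happens"))
# → {'prediction': 'ignition'}
```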

Step 3: Test the Flask application

Launch the Flask API server by running the 03_MBR_run_application.ipynb notebook.

Test the Flask API by running 04_MBR_test_application.ipynb.

Our API will be served directly from our container using Flask, a popular Python web framework. The Flask application, which will call our prediction function, is defined in the wsgi.py file.
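A minimal sketch of such a wsgi.py is shown below; the /prediction route name and the raw-text payload format are assumptions, not the repo's exact code:

```python
# Hypothetical sketch of wsgi.py: a minimal Flask app exposing predict()
# on a POST /prediction endpoint. Route name and payload format are
# assumptions about the repo's actual file.
from flask import Flask, jsonify, request

application = Flask(__name__)

def predict(text: str) -> dict:
    # Stand-in for prediction.predict(); the real app imports it.
    return {"prediction": "ignition" if "key" in text.lower() else "other"}

@application.route("/prediction", methods=["POST"])
def prediction():
    text = request.get_data(as_text=True)
    return jsonify(predict(text))

# To serve locally: application.run(host="0.0.0.0", port=8080)
```

The module-level name `application` is the conventional WSGI entry point that servers such as Gunicorn look for by default.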

Step 4: Building the application inside OpenShift

Now that the application code is working, we are ready to package it as a container image and run it directly in OpenShift as a service that we can call from any other application. Follow the steps below to set this up in OpenShift. The OpenShift Dedicated dashboard can be accessed from the application switcher in the top bar of the RHODS dashboard.
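The console clicks aren't reproduced here; if you prefer the command line, a roughly equivalent sequence with the oc client might look like the following (the application name "ml-app" and the Python builder image are assumptions):

```shell
# Hypothetical oc CLI equivalent of the console steps; requires being
# logged in to your Sandbox cluster. "ml-app" and the Python S2I builder
# are assumed names, not taken from the repo.

# Build a container image straight from the Git repo (source-to-image):
oc new-app python~https://github.com/JasonSCFu/Deploy-ML-Application-using-OpenShift --name ml-app

# Expose the service through a route so it is reachable from outside:
oc expose service/ml-app
oc get route ml-app
```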


Step 5: Testing the application

Copy and paste the route's link into your browser.

5.1: cURL from a terminal session:

We can use the OpenShift web terminal to access the service from a command line. In the terminal shell, enter a cURL command with sample text such as "I turn the key and nothing happens". Replace localhost in the command with the right hostname for the route, and make sure to include /prediction.
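A sketch of the command — the port and the raw-text payload are assumptions, and the hostname must be replaced with your route's:

```shell
# Hypothetical cURL call; replace localhost:8080 with your route's hostname.
URL="http://localhost:8080/prediction"
curl -s -X POST -d 'I turn the key and nothing happens' "$URL" || true
# (the `|| true` only keeps the sketch from aborting while no route exists)
```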


5.2: From Python code:

Send a RESTful POST request with sample text such as "I turn the key and nothing happens". Replace localhost in the code with the right hostname for the route, and make sure to include /prediction.
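A sketch of such a request using the requests library; the raw-text payload format is an assumption, and the route must be filled in with your own hostname:

```python
# Hypothetical Python client for the /prediction endpoint. The payload
# format (raw text in the request body) is an assumption.
import requests

def get_prediction(route: str, text: str) -> dict:
    """POST the repair text to <route>/prediction and return the JSON reply."""
    response = requests.post(f"{route}/prediction", data=text)
    response.raise_for_status()
    return response.json()

# With a live route (replace localhost with the route's hostname):
# get_prediction("http://localhost:8080", "I turn the key and nothing happens")
```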


5.3: From a notebook:

We can also test the REST API endpoint from a Jupyter notebook. Open the notebook named 05_MBR_enter_repair.ipynb. In the first cell, replace the placeholders with your own values: the repair text goes in the my_text field, and the route in the my_route field.
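The filled-in first cell then looks something like this (the route value is a placeholder to replace with the one copied from OpenShift):

```python
# First cell of 05_MBR_enter_repair.ipynb, with placeholders filled in.
my_route = "http://<your-route-hostname>"       # the route copied from OpenShift
my_text = "I turn the key and nothing happens"  # the repair text to classify
```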


Run both cells and see the result.

