byoc-nginx-python

Build and Deploy an ML Application from scratch to SageMaker

We demonstrate building an ML inference application that predicts the number of rings of abalone (a proxy for the animal's age).

The payload is sent as a raw (untransformed) CSV string to the inference application, which is hosted as a serial inference pipeline on a real-time endpoint. The raw payload is first received by the featurizer container, which transforms it and returns the transformed record (float values) as a CSV string.
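The featurizer's transform step can be sketched as below. This is a minimal, illustrative version: the one-hot encoding of the categorical Sex column (M/F/I) and the assumption that the remaining seven Abalone columns pass through as numeric features are assumptions of this sketch, not the repo's exact preprocessing code (which lives in the featurizer folder).

```python
import csv
import io

# Assumed one-hot mapping for the Sex column (M/F/I); the real featurizer
# may use a fitted scikit-learn transformer instead.
SEX_ONE_HOT = {"M": [1.0, 0.0, 0.0], "F": [0.0, 1.0, 0.0], "I": [0.0, 0.0, 1.0]}

def featurize(raw_csv: str) -> str:
    """Transform one raw Abalone CSV record into a CSV string of floats."""
    row = next(csv.reader(io.StringIO(raw_csv)))
    sex, numerics = row[0], [float(v) for v in row[1:]]
    features = SEX_ONE_HOT[sex] + numerics  # encoded category + numeric columns
    return ",".join(f"{v:.6f}" for v in features)

# Example raw payload: Sex, Length, Diameter, Height, Whole/Shucked/Viscera/Shell weight
print(featurize("M,0.455,0.365,0.095,0.514,0.2245,0.101,0.15"))
```

The returned string contains only floats, which is what the downstream predictor container expects.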

The transformed record is then passed to the predictor container (XGBoost model). The predictor converts the transformed record into XGBoost's DMatrix format, loads the model, calls booster.predict(input_data), and returns the predictions (Rings) in JSON format.
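The predictor's request flow can be sketched as follows. To keep the sketch runnable without a trained model, a stub stands in for the loaded XGBoost Booster; the real container wraps the record in `xgboost.DMatrix` and calls `booster.predict`, as shown in the non-stub branch. The response shape (`predictions` / `Rings`) is an assumption for illustration.

```python
import json

def predict_fn(transformed_csv: str, booster=None) -> str:
    """Parse the featurizer's CSV output, score it, and return JSON."""
    features = [float(v) for v in transformed_csv.split(",")]
    if booster is None:
        # Stub in place of booster.predict so the sketch runs standalone.
        rings = sum(features) / len(features)
    else:
        import numpy as np
        import xgboost as xgb
        # Real path: wrap the record in a DMatrix and call the Booster.
        rings = float(booster.predict(xgb.DMatrix(np.array([features])))[0])
    return json.dumps({"predictions": [{"Rings": round(rings, 2)}]})

print(predict_fn("1,0,0,0.5"))
```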

Note: We use a pretrained XGBoost model trained on the Abalone Data Set.

The featurizer and predictor models are packaged as a serial inference pipeline and deployed to an Amazon SageMaker real-time endpoint.
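With the SageMaker Python SDK, packaging the two containers and deploying them behind one endpoint can be sketched as below. The image URIs, S3 model paths, role ARN, names, and instance type are all placeholders; SageMaker invokes the models in list order, so the featurizer's output feeds the predictor.

```python
# Sketch of deploying the two containers as a serial inference pipeline.
# All URIs, ARNs, and names below are placeholders.
import sagemaker
from sagemaker.model import Model
from sagemaker.pipeline import PipelineModel

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

featurizer = Model(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/abalone-featurizer:latest",
    model_data="s3://<bucket>/featurizer/model.tar.gz",
    role=role,
    sagemaker_session=session,
)
predictor = Model(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/abalone-predictor:latest",
    model_data="s3://<bucket>/predictor/model.tar.gz",
    role=role,
    sagemaker_session=session,
)

# Containers run in order: featurizer output becomes predictor input.
pipeline = PipelineModel(
    name="abalone-inference-pipeline",
    role=role,
    models=[featurizer, predictor],
    sagemaker_session=session,
)
pipeline.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```

The endpoint can then be invoked with the raw CSV payload via `InvokeEndpoint` with `ContentType: text/csv`.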

Abalone XGBoost ML Application

BYO Featurizer (pre-processing) container

Step 1: Build the featurizer model and container.

Refer to the featurizer folder for the full implementation.

Featurizer

BYO XGBoost predictor container

Step 1: Build the predictor model and container.

Refer to the predictor folder for the full implementation.

Predictor

References

  1. Abalone Data Set
  2. SageMaker local mode
  3. Host models along with pre-processing logic as serial inference pipeline behind one endpoint
  4. Run Real-time Predictions with an Inference Pipeline