- Create a virtualenv: `virtualenv environment_name -p python3.6` (Zappa doesn't support Python 3.7 yet), then `source environment_name/bin/activate`
- `pip install sklearn numpy flask boto3 scipy zappa`
- `zappa init`
- Take a look at `zappa_settings.json`. Your app should reside in a directory called `api`.
- Edit `zappa_settings.json` so that the function name is `api.appname.app` (not `appname.py`).
- Ensure that the following key/value is in `zappa_settings.json`, editing if needed: `"slim_handler": true` <-- this is needed to upload large dependencies/files. A sample settings file is sketched after this list.
- Finally, `zappa deploy dev`
- Upload the `scaling.pkl` and `classifier.pkl` files to your S3 bucket and edit the code accordingly (see the example app sketched after this list).
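The resulting `zappa_settings.json` might look roughly like the sketch below. The stage name `dev` matches the deploy command above; the project name, region and bucket are placeholders you will need to change:

```json
{
    "dev": {
        "app_function": "api.appname.app",
        "project_name": "car-purchase-predictor",
        "runtime": "python3.6",
        "aws_region": "us-east-1",
        "s3_bucket": "your-zappa-deployments-bucket",
        "slim_handler": true
    }
}
```

And here is a minimal sketch of what `api/appname.py` could look like, assuming the pickles are fetched from S3 at cold start. The bucket name, object keys and the `/predict` route are assumptions, not this repo's actual code, so adjust them to match your setup:

```python
# api/appname.py -- minimal sketch; bucket name, object keys and the /predict
# route are assumptions, not the repo's actual code.
import pickle

import boto3
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

BUCKET = "your-model-bucket"  # assumption: the bucket holding the .pkl files
s3 = boto3.client("s3")


def load_pickle_from_s3(key):
    """Download a pickled object from S3 and deserialize it."""
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    return pickle.loads(body)


# Load the fitted StandardScaler and the classifier once, at cold start.
scaler = load_pickle_from_s3("scaling.pkl")
classifier = load_pickle_from_s3("classifier.pkl")


@app.route("/predict", methods=["POST"])
def predict():
    # Expected body: {"feature_array": [age, salary]}
    features = request.get_json()["feature_array"]
    scaled = scaler.transform(np.array(features).reshape(1, -1))
    prediction = classifier.predict(scaled)
    return jsonify({"prediction": int(prediction[0])})


if __name__ == "__main__":
    app.run(debug=True)
```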
## Misc

### What does this do?
Simple classification model to check serverless deployment. It takes an age and a salary and predicts whether the person will buy a car. To be clear, this repo is not about doing ML as such, but about how to get your ML model into a serverless environment and run predictions against it via a REST query.
I have two pickled files, one for the StandardScaler and one for the model.
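For reference, below is a sketch of how two such pickles could be produced; the training data and the choice of classifier are made up for illustration and are not necessarily what this repo used:

```python
# Sketch of producing scaling.pkl and classifier.pkl.
# The training data and the choice of classifier here are illustrative only.
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Toy training set: [age, salary] -> bought a car (1) or not (0).
X = np.array([[18, 2450], [25, 30000], [40, 60000], [55, 90000]])
y = np.array([0, 0, 1, 1])

scaler = StandardScaler().fit(X)
classifier = LogisticRegression().fit(scaler.transform(X), y)

with open("scaling.pkl", "wb") as f:
    pickle.dump(scaler, f)
with open("classifier.pkl", "wb") as f:
    pickle.dump(classifier, f)
```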
`POST` requests need to be in JSON format and should look like the following:
```json
{
    "feature_array": [18, 2450]
}
```
where the first value is the age and the second is the salary.
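For example, the request could be sent from Python with the `requests` library; the URL below is a placeholder for the API Gateway endpoint that `zappa deploy dev` prints, plus whatever route your app exposes:

```python
# Minimal sketch of the POST request; the URL and route are placeholders.
import requests

url = "https://your-api-id.execute-api.us-east-1.amazonaws.com/dev/predict"
payload = {"feature_array": [18, 2450]}  # [age, salary]

response = requests.post(url, json=payload)
print(response.json())  # the predicted class
```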
There is no error correction or handling of bad requests. This is purely a demo of how to deploy a machine learning model and serve it from AWS Lambda.
You can try sending a `POST` request to the following endpoint that I have set up. See above for the JSON content to send in the `POST` request. This endpoint is a simple ML model running in AWS Lambda and will predict whether a person will purchase a car based on their salary and age. Use Insomnia/Postman or cURL: