- A Cortex cluster (installation instructions)
- The Cortex CLI (installation instructions)
Let's deploy a classifier built using the famous iris data set!
```bash
$ mkdir iris && cd iris && touch cortex.yaml
```
Cortex requires a `cortex.yaml` file, which defines a `deployment` resource. An `api` resource makes the model available as a live web service that can serve real-time predictions.
```yaml
# cortex.yaml

- kind: deployment
  name: iris

- kind: api
  name: classifier
  model: s3://cortex-examples/iris/tensorflow.zip
  model_format: tensorflow
```
Cortex can read from any S3 bucket that you have access to.

Deploy the API:

```bash
$ cortex deploy
```
You can get a summary of the status of your resources using `cortex get`:

```bash
$ cortex get --watch
```
Get the API's endpoint:

```bash
$ cortex get classifier
```
Use cURL to test the API:

```bash
$ curl -k -X POST -H "Content-Type: application/json" \
    -d '{ "samples": [ { "sepal_length": 5.2, "sepal_width": 3.6, "petal_length": 1.4, "petal_width": 0.3 } ] }' \
    <API endpoint>
```
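The same request can be made from a Python client. Below is a minimal sketch using only the standard library; the `build_request` helper is hypothetical (not part of Cortex or its CLI), and the endpoint placeholder should be replaced with the URL printed by `cortex get classifier`:

```python
import json
from urllib.request import Request, urlopen


def build_request(endpoint: str) -> Request:
    """Build a POST request carrying one iris sample,
    matching the JSON payload from the cURL example."""
    payload = {
        "samples": [
            {
                "sepal_length": 5.2,
                "sepal_width": 3.6,
                "petal_length": 1.4,
                "petal_width": 0.3,
            }
        ]
    }
    return Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# To send the request once you have a real endpoint:
#
# with urlopen(build_request("<API endpoint>")) as resp:
#     print(resp.read().decode())
```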
Delete the deployment:

```bash
$ cortex delete iris
```