108 changes: 57 additions & 51 deletions README.md
@@ -1,12 +1,10 @@
# Deploy machine learning models in production

Cortex is an open source platform for deploying machine learning models as production web services.
# Machine learning model serving infrastructure

<br>

<!-- Delete on release branches -->
<!-- CORTEX_VERSION_README_MINOR -->
[install](https://cortex.dev/install) • [tutorial](https://cortex.dev/iris-classifier) • [docs](https://cortex.dev) • [examples](https://github.com/cortexlabs/cortex/tree/0.15/examples) • [we're hiring](https://angel.co/cortex-labs-inc/jobs) • [email us](mailto:hello@cortex.dev) • [chat with us](https://gitter.im/cortexlabs/cortex)<br><br>
[install](https://cortex.dev/install) • [docs](https://cortex.dev) • [examples](https://github.com/cortexlabs/cortex/tree/0.15/examples) • [we're hiring](https://angel.co/cortex-labs-inc/jobs) • [chat with us](https://gitter.im/cortexlabs/cortex)<br><br>

<!-- Set header Cache-Control=no-cache on the S3 object metadata (see https://help.github.com/en/articles/about-anonymized-image-urls) -->
![Demo](https://d1zqebknpdh033.cloudfront.net/demo/gif/v0.13_2.gif)
@@ -25,43 +23,15 @@ Cortex is an open source platform for deploying machine learning models as production web services.

<br>

## Spinning up a cluster
## Deploying a model

Cortex is designed to be self-hosted on any AWS account. You can spin up a cluster with a single command:
### Install the CLI

<!-- CORTEX_VERSION_README_MINOR -->
```bash
# install the CLI on your machine
$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.15/get-cli.sh)"

# provision infrastructure on AWS and spin up a cluster
$ cortex cluster up

aws region: us-west-2
aws instance type: g4dn.xlarge
spot instances: yes
min instances: 0
max instances: 5

aws resource cost per hour
1 eks cluster $0.10
0 - 5 g4dn.xlarge instances for your apis $0.1578 - $0.526 each (varies based on spot price)
0 - 5 50gb ebs volumes for your apis $0.007 each
1 t3.medium instance for the operator $0.0416
1 20gb ebs volume for the operator $0.003
2 network load balancers $0.0225 each

your cluster will cost $0.19 - $2.85 per hour based on cluster size and spot instance pricing/availability

○ spinning up your cluster ...

your cluster is ready!
```
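
For reference, the quoted hourly range follows from the line items above — the lower bound is the fixed overhead with zero api instances, and the upper bound adds five g4dn.xlarge nodes at the highest listed spot price:

```
fixed overhead:   1 × $0.10 + 1 × $0.0416 + 1 × $0.003 + 2 × $0.0225 ≈ $0.19/hr
with 5 api nodes: $0.19 + 5 × ($0.526 + $0.007)                      ≈ $2.85/hr
```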

<br>

## Deploying a model

### Implement your predictor

```python
# predictor.py — the class body is collapsed in this diff view, so this is a
# sketch; the transformers sentiment pipeline is an assumption based on the
# sentiment-classifier deployment below
from transformers import pipeline

class PythonPredictor:
    def __init__(self, config):
        self.model = pipeline(task="sentiment-analysis")

    def predict(self, payload):
        return self.model(payload["text"])[0]["label"].lower()
```

@@ -84,35 +54,75 @@ class PythonPredictor:

### Configure your deployment

```yaml
# cortex.yaml — the opening of this block is collapsed in this diff view; the
# API name is taken from the `cortex deploy` output below
- name: sentiment-classifier
  predictor:
    type: python
    path: predictor.py
  tracker:
    model_type: classification
  compute:
    gpu: 1
    mem: 4G
```

### Deploy to AWS
### Deploy your model

```bash
$ cortex deploy

creating sentiment-classifier
```

### Serve real-time predictions
### Serve predictions

```bash
$ curl http://localhost:8888 \
-X POST -H "Content-Type: application/json" \
-d '{"text": "serving models locally is cool!"}'

positive
```

<br>

## Deploying models at scale

### Spin up a cluster

Cortex clusters are designed to be self-hosted on any AWS account (GCP support is coming soon):

```bash
$ cortex cluster up

aws region: us-west-2
aws instance type: g4dn.xlarge
spot instances: yes
min instances: 0
max instances: 5

your cluster will cost $0.19 - $2.85 per hour based on cluster size and spot instance pricing/availability

○ spinning up your cluster ...

your cluster is ready!
```

### Deploy to your cluster with the same code and configuration

```bash
$ cortex deploy --env aws

creating sentiment-classifier
```

### Serve predictions at scale

```bash
$ curl http://***.amazonaws.com/sentiment-classifier \
-X POST -H "Content-Type: application/json" \
-d '{"text": "the movie was amazing!"}'
-d '{"text": "serving models at scale is really cool!"}'

positive
```

### Monitor your deployment

```bash
$ cortex get sentiment-classifier --watch
$ cortex get sentiment-classifier

status   up-to-date   requested   last update   avg request   2XX
live     1            1           8s            24ms          12
@@ -122,27 +132,23 @@ positive 8
negative 4
```

<br>
### How it works

## What is Cortex similar to?
The CLI sends configuration and code to the cluster every time you run `cortex deploy`. Each model is loaded into a Docker container, along with any Python packages and request handling code. The model is exposed as a web service using a Network Load Balancer (NLB) and FastAPI / TensorFlow Serving / ONNX Runtime (depending on the model type). The containers are orchestrated on Elastic Kubernetes Service (EKS) while logs and metrics are streamed to CloudWatch.
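
As a rough mental model — not Cortex's actual generated code — the serving layer for a Python predictor behaves like a small FastAPI app wrapping the predictor class; the module path, route, and config below are illustrative:

```python
# illustrative sketch only; Cortex generates and manages this layer itself
from fastapi import FastAPI

from predictor import PythonPredictor  # hypothetical import of the user's predictor

app = FastAPI()
predictor = PythonPredictor(config={})  # config comes from cortex.yaml in practice

@app.post("/")
def predict(payload: dict):
    # forward the JSON request body to the user's predict() implementation
    return predictor.predict(payload)
```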

Cortex is an open source alternative to serving models with SageMaker or building your own model deployment platform on top of AWS services like Elastic Kubernetes Service (EKS), Elastic Container Service (ECS), Lambda, Fargate, and Elastic Compute Cloud (EC2) and open source projects like Docker, Kubernetes, and TensorFlow Serving.
Cortex manages its own Kubernetes cluster so that end-to-end functionality like request-based autoscaling, GPU support, and spot instance management can work out of the box without any additional DevOps work.

<br>

## How does Cortex work?

The CLI sends configuration and code to the cluster every time you run `cortex deploy`. Each model is loaded into a Docker container, along with any Python packages and request handling code. The model is exposed as a web service using Elastic Load Balancing (ELB), TensorFlow Serving, and ONNX Runtime. The containers are orchestrated on Elastic Kubernetes Service (EKS) while logs and metrics are streamed to CloudWatch.
## What is Cortex similar to?

Cortex manages its own Kubernetes cluster so that end-to-end functionality like request-based autoscaling, GPU support, and spot instance management can work out of the box without any additional DevOps work.
Cortex is an open source alternative to serving models with SageMaker or building your own model deployment platform on top of AWS services like Elastic Kubernetes Service (EKS), Lambda, or Fargate and open source projects like Docker, Kubernetes, TensorFlow Serving, and TorchServe.

<br>

## Examples of Cortex deployments
## Examples

<!-- CORTEX_VERSION_README_MINOR x5 -->
* [Sentiment analysis](https://github.com/cortexlabs/cortex/tree/0.15/examples/tensorflow/sentiment-analyzer): deploy a BERT model for sentiment analysis.
<!-- CORTEX_VERSION_README_MINOR x3 -->
* [Image classification](https://github.com/cortexlabs/cortex/tree/0.15/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
* [Search completion](https://github.com/cortexlabs/cortex/tree/0.15/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
* [Text generation](https://github.com/cortexlabs/cortex/tree/0.15/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
* [Iris classification](https://github.com/cortexlabs/cortex/tree/0.15/examples/sklearn/iris-classifier): deploy a scikit-learn model to classify iris flowers.
2 changes: 1 addition & 1 deletion cli/cmd/root.go
@@ -121,7 +121,7 @@ func initTelemetry() {
var _rootCmd = &cobra.Command{
Use: "cortex",
Aliases: []string{"cx"},
Short: "deploy machine learning models in production",
Short: "machine learning model serving infrastructure",
}

func Execute() {
46 changes: 19 additions & 27 deletions docs/cluster-management/install.md
@@ -2,41 +2,37 @@

_WARNING: you are on the master branch, please refer to the docs on the branch that matches your `cortex version`_

## Prerequisites
## Running on your machine or a single instance

1. [Docker](https://docs.docker.com/install)
2. [AWS credentials](aws-credentials.md)
[Docker](https://docs.docker.com/install) is required to run Cortex locally. In addition, your machine (or Docker Desktop, for Mac users) should have at least 8GB of memory if you plan to deploy large deep learning models.
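
A quick way to confirm Docker is reachable and to see how much memory it can use (standard Docker CLI; prints bytes):

```bash
# print the memory available to the Docker daemon, in bytes
docker info --format '{{.MemTotal}}'
```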

## Spin up a cluster

See [cluster configuration](config.md) to learn how you can customize your cluster with `cluster.yaml` and see [EC2 instances](ec2-instances.md) for an overview of several EC2 instance types. To use GPU nodes, you may need to subscribe to the [EKS-optimized AMI with GPU Support](https://aws.amazon.com/marketplace/pp/B07GRHFXGM) and [file an AWS support ticket](https://console.aws.amazon.com/support/cases#/create?issueType=service-limit-increase&limitType=ec2-instances) to increase the limit for your desired instance type.
### Install the CLI

<!-- CORTEX_VERSION_MINOR -->
```bash
# install the CLI on your machine
$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"
```

# provision infrastructure on AWS and spin up a cluster
$ cortex cluster up

aws resource cost per hour
1 eks cluster $0.10
0 - 5 g4dn.xlarge instances for your apis $0.1578 - $0.526 each (varies based on spot price)
0 - 5 50gb ebs volumes for your apis $0.007 each
1 t3.medium instance for the operator $0.0416
1 20gb ebs volume for the operator $0.003
2 network load balancers $0.0225 each
## Running at scale on AWS

your cluster will cost $0.19 - $2.85 per hour based on cluster size and spot instance pricing/availability
[Docker](https://docs.docker.com/install) and valid [AWS credentials](aws-credentials.md) are required to run a Cortex cluster on AWS.

○ spinning up your cluster ...
### Spin up a cluster

your cluster is ready!
```
See [cluster configuration](config.md) to learn how you can customize your cluster with `cluster.yaml` and see [EC2 instances](ec2-instances.md) for an overview of several EC2 instance types.
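
A minimal `cluster.yaml` sketch — the field names below are assumptions based on the `cortex cluster up` prompts shown earlier, so verify them against [cluster configuration](config.md):

```yaml
# cluster.yaml (illustrative; check config.md for the authoritative schema)
region: us-west-2
instance_type: g4dn.xlarge
spot: true
min_instances: 0
max_instances: 5
```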

## Deploy a model
To use GPU nodes, you may need to subscribe to the [EKS-optimized AMI with GPU Support](https://aws.amazon.com/marketplace/pp/B07GRHFXGM) and [file an AWS support ticket](https://console.aws.amazon.com/support/cases#/create?issueType=service-limit-increase&limitType=ec2-instances) to increase the limit for your desired instance type.

<!-- CORTEX_VERSION_MINOR -->
```bash
# install the CLI on your machine
$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/master/get-cli.sh)"

# provision infrastructure on AWS and spin up a cluster
$ cortex cluster up
```

## Deploy an example

```bash
# clone the Cortex repository
git clone -b master https://github.com/cortexlabs/cortex.git

# navigate to the TensorFlow iris classification example
cd cortex/examples/tensorflow/iris-classifier

# deploy the model to the cluster
# deploy the model
cortex deploy

# view the status of the api
cortex get iris-classifier

# make a prediction using the endpoint shown by `cortex get`
curl -X POST -H "Content-Type: application/json" \
-d '{ "sepal_length": 5.2, "sepal_width": 3.6, "petal_length": 1.4, "petal_width": 0.3 }' \
<API endpoint>
```

## Cleanup

```bash
# delete the api
cortex delete iris-classifier
```
2 changes: 1 addition & 1 deletion docs/summary.md
@@ -1,6 +1,6 @@
# Table of contents

* [Deploy machine learning models in production](../README.md)
* [Machine learning model serving infrastructure](../README.md)
* [Install](cluster-management/install.md)
* [Tutorial](../examples/sklearn/iris-classifier/README.md)
* [GitHub](https://github.com/cortexlabs/cortex)
2 changes: 1 addition & 1 deletion examples/pytorch/language-identifier/sample.json
@@ -1,3 +1,3 @@
{
"text": "deploy machine learning models in production"
"text": "machine learning model serving infrastructure"
}