Merged
9 changes: 6 additions & 3 deletions docs/cluster-management/install.md
@@ -19,13 +19,16 @@ See [here](../miscellaneous/cli.md#install-cortex-cli-without-python-client) to
 ```bash
 # clone the Cortex repository
 git clone -b master https://github.com/cortexlabs/cortex.git
+
+# navigate to the Pytorch text generator example
+cd cortex/examples/pytorch/text-generator
 ```
 
 ### Using the CLI
 
 ```bash
 # deploy the model as a realtime api
-cortex deploy cortex/examples/pytorch/text-generator/cortex.yaml
+cortex deploy
 
 # view the status of the api
 cortex get --watch
@@ -39,7 +42,7 @@ cortex get text-generator
 # generate text
 curl <API endpoint> \
   -X POST -H "Content-Type: application/json" \
-  -d '{"text": "machine learning is"}' \
+  -d '{"text": "machine learning is"}'
 
 # delete the api
 cortex delete text-generator
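The curl fix above removes a stray trailing backslash after the last argument. In the shell, a backslash at the end of a line continues the command onto the next line, so with the old snippet the blank line (or whatever the user typed next) was swallowed into the `curl` invocation. A minimal illustration of the behavior:

```shell
# A trailing backslash joins the next line into the same command,
# so this is a single `echo` with two arguments:
echo one \
two

# Without the backslash, these are two separate commands:
echo one
echo two
```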
@@ -54,7 +57,7 @@ import requests
 local_client = cortex.client("local")
 
 # deploy the model as a realtime api and wait for it to become active
-deployments = local_client.deploy("cortex/examples/pytorch/text-generator/cortex.yaml", wait=True)
+deployments = local_client.deploy("./cortex.yaml", wait=True)
 
 # get the api's endpoint
 url = deployments[0]["api"]["endpoint"]
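The path changes in this file all hinge on one detail: `cortex deploy` with no argument and `deploy("./cortex.yaml", ...)` both resolve the config against the current working directory, which is why the docs now `cd` into the example directory first. A small standard-library sketch of that resolution (the `cortex.yaml` here is just an empty stand-in file, not the real example config):

```python
import os
import tempfile

# "./cortex.yaml" is resolved against the process's current working
# directory, so the deploy commands only find the config after the
# `cd cortex/examples/pytorch/text-generator` step.
workdir = tempfile.mkdtemp()
os.chdir(workdir)  # stands in for the `cd` into the example directory
open("cortex.yaml", "w").close()  # stand-in for the example's config

resolved = os.path.abspath("./cortex.yaml")
print(resolved)  # an absolute path ending in cortex.yaml
```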
9 changes: 6 additions & 3 deletions pkg/workloads/cortex/client/README.md
@@ -38,6 +38,9 @@ You must have [Docker](https://docs.docker.com/install) installed to run Cortex
 ```bash
 # clone the Cortex repository
 git clone -b master https://github.com/cortexlabs/cortex.git
+
+# navigate to the Pytorch text generator example
+cd cortex/examples/pytorch/text-generator
 ```
 
 ### In Python
@@ -48,7 +51,7 @@ import requests
 local_client = cortex.client("local")
 
 # deploy the model as a realtime api and wait for it to become active
-deployments = local_client.deploy("cortex/examples/pytorch/text-generator/cortex.yaml", wait=True)
+deployments = local_client.deploy("./cortex.yaml", wait=True)
 
 # get the api's endpoint
 url = deployments[0]["api"]["endpoint"]
@@ -63,7 +66,7 @@ local_client.delete_api("text-generator")
 ### Using the CLI
 ```bash
 # deploy the model as a realtime api
-cortex deploy cortex/examples/pytorch/text-generator/cortex.yaml
+cortex deploy
 
 # view the status of the api
 cortex get --watch
@@ -77,7 +80,7 @@ cortex get text-generator
 # generate text
 curl <API endpoint> \
   -X POST -H "Content-Type: application/json" \
-  -d '{"text": "machine learning is"}' \
+  -d '{"text": "machine learning is"}'
 
 # delete the api
 cortex delete text-generator
1 change: 1 addition & 0 deletions pkg/workloads/cortex/client/cortex/binary/__init__.py
@@ -87,6 +87,7 @@ def run_cli(
         if not hide_output:
             if (not mixed_output) or (mixed_output and not result_found):
                 sys.stdout.write(c)
+                sys.stdout.flush()
 
     process.wait()
 
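The one-line `sys.stdout.flush()` addition makes the relayed CLI output appear as each character is written: when stdout is not a terminal (as when the Python client captures it), it is block-buffered, so without the flush the characters sit in the buffer and "live" output arrives in bursts. A rough sketch of the pattern, with a plain `echo` standing in for the Cortex CLI process:

```python
import subprocess
import sys

def stream_output(cmd):
    """Relay a child process's stdout one character at a time."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, encoding="utf8")
    chars = []
    for c in iter(lambda: proc.stdout.read(1), ""):  # "" signals EOF
        chars.append(c)
        sys.stdout.write(c)
        # Without this, characters accumulate in the (block-buffered)
        # stdout buffer until it fills, instead of appearing live.
        sys.stdout.flush()
    proc.wait()
    return "".join(chars)

print(repr(stream_output(["echo", "hello"])))
```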