# Prefect Demo

Prefect 2 (aka Orion) examples running self-contained in a local Kubernetes cluster. Batteries (mostly) included. 🔋

## Getting started

Prerequisites:

- `make`
- `node` (required for pyright)
- `python` >= 3.10
- `docker` & `docker compose`
- `k3d` (for creating a local Kubernetes cluster)
- `kubectl`
- `helm`

To start:

## Examples

Flows:

Deployments to Kubernetes are created via:

## Usage

### Local

1. `make param-flow`, `make dask-flow`, `make ray-flow`, or `make sub-flow`
2. `make ui`, then navigate to http://localhost:4200/

The Orion SQLite database is stored in `~/.prefect/prefect.db`.
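The database above is plain SQLite, so it can be inspected directly with the `sqlite3` CLI. A sketch of what that might look like; the `flow_run` table name is an assumption about Prefect's schema, so list the tables first:

```shell
# Open the local Orion database and list its tables.
sqlite3 ~/.prefect/prefect.db ".tables"

# Example query: count flow runs (assumes a flow_run table exists in this Prefect version).
sqlite3 ~/.prefect/prefect.db "SELECT count(*) FROM flow_run;"
```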

### Kubernetes

Create a k3d cluster with an image registry, minio (for remote storage), and the Prefect agent and API:

```shell
make kubes
```

Create deployments that run on Kubernetes:

```shell
make deploy
```
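Under the hood, a Prefect 2 deployment is typically built and registered with the Prefect CLI, roughly as below. This is a sketch only: the flow path, block names, and queue name are illustrative, not this repo's actual values.

```shell
# Build a deployment manifest for a flow (path and names are hypothetical).
prefect deployment build flows/param_flow.py:param_flow \
    --name param-flow-k8s \
    --infra-block kubernetes-job/dev \
    --storage-block remote-file-system/minio \
    --work-queue kubernetes

# Register the generated manifest with the API.
prefect deployment apply param_flow-deployment.yaml
```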

## UI

Prefect UI: http://localhost:4200/

Minio UI: http://localhost:9001. User: `minioadmin`, password: `minioadmin`.

## API

Prefect API: http://localhost:4200/api/
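As a quick sanity check, the API can be probed from the shell. The `/api/health` path below is Prefect 2's health endpoint as I understand it; verify against your Prefect version if it 404s:

```shell
# Expect "true" from a healthy Prefect 2 / Orion server.
curl -s http://localhost:4200/api/health
```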

## Docs

- Blocks - an overview and a look into the database tables for Blocks.
- Deployment - an overview of the deployment process.

## References

- Tutorials from which some of the examples are taken

## Cloud

To run flows against a cloud workspace, set:

```shell
export PREFECT_API_URL=https://api.prefect.cloud/api/accounts/$accountid/workspaces/$workspaceid
export PREFECT_API_KEY=<your api key>
```

`$accountid` and `$workspaceid` are visible in the URL when you log in to Prefect Cloud. The API key can be created from your user profile (bottom left).
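The API URL is just those two IDs spliced into a fixed template. As a sketch (the IDs below are placeholders, not real values):

```shell
# Placeholder IDs - substitute the values from your Prefect Cloud URL.
accountid="11111111-2222-3333-4444-555555555555"
workspaceid="66666666-7777-8888-9999-000000000000"

export PREFECT_API_URL="https://api.prefect.cloud/api/accounts/${accountid}/workspaces/${workspaceid}"
echo "$PREFECT_API_URL"
```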

Setting the environment variables is recommended. An alternative is to log in using:

```shell
prefect cloud login --key <your api key>
```

However, be aware that this stores your API URL and key as plain text in `~/.prefect/profiles.toml`.

## Ray

Create a Kubernetes Ray cluster:

```shell
make kubes-ray
```

Ray dashboard: http://localhost:8265
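If the dashboard is not reachable on that port, it can be port-forwarded by hand. The service name below is a guess, not this repo's actual value; find the real one first:

```shell
# Locate the Ray head service, then forward the dashboard port (service name is hypothetical).
kubectl get svc --all-namespaces | grep -i ray
kubectl port-forward svc/ray-cluster-head 8265:8265
```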

## Known issues

### Major

### Minor

See all roadmap-tagged issues for planned work.

## Troubleshooting

### Flows are late

Check the logs of the agent/worker:

```shell
make kubes-logs
```
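`make kubes-logs` presumably wraps `kubectl`; doing the same by hand looks roughly like this. The label selector is an assumption, so check the actual pod labels first:

```shell
# Find the agent pod, then tail its logs (label selector is hypothetical).
kubectl get pods
kubectl logs --follow --selector app=prefect-agent
```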