feat: add ray ids to workflows, clean up types
MasterPtato committed May 9, 2024
1 parent b6b8a23 commit 02510dd
Showing 22 changed files with 551 additions and 295 deletions.
38 changes: 18 additions & 20 deletions docs/libraries/workflow/GLOSSARY.md
@@ -1,44 +1,46 @@
TODO

# Glossary

## Worker

A process that's running workflows.
A process that polls for pending workflows matching a specific filter. The filter is based on which workflows are registered in the given worker's registry.

## Registry

There are usually multiple workers running at the same time.
A collection of registered workflows.
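As a rough illustration of how the two relate (the `Registry` and `PendingWorkflow` types below are hypothetical stand-ins, not the library's actual API), a worker only claims pending workflows whose names appear in its registry:

```rust
use std::collections::HashSet;

// Hypothetical sketch: a registry modeled as the set of workflow names this
// worker knows how to run.
struct Registry {
    workflows: HashSet<&'static str>,
}

struct PendingWorkflow {
    id: u64,
    name: &'static str,
}

impl Registry {
    // The worker's filter: only registered workflows are claimed.
    fn accepts(&self, wf: &PendingWorkflow) -> bool {
        self.workflows.contains(wf.name)
    }
}

fn main() {
    let registry = Registry {
        workflows: HashSet::from(["server-provision", "email-loop"]),
    };
    let pending = vec![
        PendingWorkflow { id: 1, name: "server-provision" },
        PendingWorkflow { id: 2, name: "cf-worker-deploy" },
    ];
    // A worker repeatedly polls for pending workflows and skips any that are
    // not in its registry.
    for wf in pending.iter().filter(|wf| registry.accepts(wf)) {
        println!("claiming workflow {} ({})", wf.id, wf.name);
    }
}
```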

## Workflow

A series of activies to be ran together.
A series of fallible code executions (known as activities), signal listeners, signal transmitters, or sub-workflow triggers.

The code defining a workflow only specifies what activites to be ran. There is no complex logic (e.g. database queries) running within workflows.
Workflows can be thought of as a list of tasks. The code defining a workflow only specifies which items should be run; there is no complex logic (e.g. database queries) running within the top level of the workflow.

Workflow code can be reran multiple times to replay a workflow.
Upon an activity failure, workflow code can be rerun without duplicate side effects because activity outputs are cached and re-read after they succeed.
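A minimal sketch of that shape (the `WorkflowCtx` type and its activity methods are illustrative stand-ins, not the library's actual API): the workflow body is pure orchestration, while side effects live inside the activities.

```rust
// Illustrative stand-ins only; not the library's actual types.
struct WorkflowCtx;

struct Server {
    id: u64,
}

impl WorkflowCtx {
    // Side effects (DB writes, API calls) belong inside activities. Once an
    // activity succeeds, its output is cached so a rerun skips it.
    async fn create_server(&mut self, name: &str) -> Result<Server, String> {
        println!("creating server {name}");
        Ok(Server { id: 42 })
    }

    async fn assign_dns(&mut self, server_id: u64) -> Result<(), String> {
        println!("assigning DNS to server {server_id}");
        Ok(())
    }
}

// The workflow body is just the ordered task list; no queries or complex
// logic at the top level.
async fn server_provision(ctx: &mut WorkflowCtx, name: &str) -> Result<u64, String> {
    let server = ctx.create_server(name).await?;
    ctx.assign_dns(server.id).await?;
    Ok(server.id)
}

#[tokio::main]
async fn main() -> Result<(), String> {
    let mut ctx = WorkflowCtx;
    let server_id = server_provision(&mut ctx, "my-server").await?;
    println!("provisioned server {server_id}");
    Ok(())
}
```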

## Workflow State
## Activity

Persistated data about a workflow.
A block of code that can fail. An activity cannot trigger other workflows or activities, but it can call operations.

## Workflow Run
## Operation

An instance of a node running a workflow. If re-running a workflow, it will be replaying events.
A block of code that may or may not fail; it exists simply for tidiness and reuse.
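A hedged sketch of the split (function names are illustrative): an activity may call operations, but it never triggers other activities or workflows.

```rust
// Illustrative names only. An operation is a plain, reusable block of code,
// e.g. a single database lookup.
async fn get_user_email_op(user_id: u64) -> Result<String, String> {
    Ok(format!("user{user_id}@example.com"))
}

// An activity is the fallible, retried unit inside a workflow. It may call
// operations, but it cannot trigger other activities or workflows.
async fn send_welcome_email_activity(user_id: u64) -> Result<(), String> {
    let email = get_user_email_op(user_id).await?;
    println!("sending welcome email to {email}");
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), String> {
    send_welcome_email_activity(7).await
}
```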

## Workflow Event

An action that gets executed in a workflow. An event can be one of:

- Activity
- Received signal
- Dispatched sub-workflow

Events store the output from activities and are used to ensure activites are ran only once.
Events store the output from activities and are used to ensure activities are run only once.
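One hypothetical way to model a persisted event covering the three cases above (the shape is an assumption, not the actual schema; `serde_json` is already a dependency of the crate):

```rust
use serde_json::{json, Value};

// Hypothetical event shape; the real persisted schema may differ. Each
// variant corresponds to one of the bullet points above.
#[allow(dead_code)]
enum Event {
    // Stores the activity's output so replays can skip re-running it.
    Activity { name: String, output: Value },
    ReceivedSignal { name: String, body: Value },
    DispatchedSubWorkflow { name: String, sub_workflow_id: u64 },
}

fn main() {
    let event = Event::Activity {
        name: "create_server".to_string(),
        output: json!({ "server_id": 42 }),
    };
    // In practice the event would be serialized into the event history.
    if let Event::Activity { name, .. } = &event {
        println!("recorded output for activity {name}");
    }
}
```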

## Workflow Event History

List of events that have executed in this workflow. These are used in replays to verify that the workflow has not changed to an invalid state.

## Workflow Replay

After the first run of a workflow, all runs will replay the activities and compare against the event history. If an activity has already been ran successfully, the activity will be skipped in the replay and use the output from the previous run.
After the first run of a workflow, subsequent runs will replay the activities and compare against the event history. If an activity has already run successfully, it will not actually run any code and will instead use the output from the previous run.
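The caching mechanic can be demonstrated in miniature (a self-contained toy, not the library's implementation): activity outputs are keyed by their position in the history, and a replayed call returns the stored output without executing its closure.

```rust
use std::collections::HashMap;

// Toy event history: activity index -> cached output.
struct EventHistory {
    outputs: HashMap<u32, String>,
}

impl EventHistory {
    // Run an activity, or return its cached output when replaying.
    fn activity(
        &mut self,
        index: u32,
        run: impl FnOnce() -> Result<String, String>,
    ) -> Result<String, String> {
        if let Some(cached) = self.outputs.get(&index) {
            return Ok(cached.clone()); // replay: the code is skipped entirely
        }
        let output = run()?; // first run: execute the fallible block
        self.outputs.insert(index, output.clone()); // persist before moving on
        Ok(output)
    }
}

fn main() {
    let mut history = EventHistory { outputs: HashMap::new() };

    // First run executes the closure and caches "server-123".
    let first = history.activity(0, || Ok("server-123".to_string())).unwrap();

    // Replay: the closure is never executed; the cached output is returned.
    let replayed = history
        .activity(0, || Err("never executed on replay".to_string()))
        .unwrap();

    assert_eq!(first, replayed);
    println!("replayed output: {replayed}");
}
```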

## Workflow Wake Condition

@@ -47,10 +49,6 @@ If a workflow is not currently running an activity, wake conditions define when
The available conditions are:

- **Immediately** Run immediately by the first available node.
- **Deadline** Run at a given timesetamp.

## Activity

A unit of code to run within a workflow.

Activities can fail and will be retried accoriding to the retry policy of the workflow.
- **Deadline** Run at a given timestamp.
- **Signal** Run once any one of the listed signals is received.
- **Sub workflow** Run once the given sub workflow is completed.
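The conditions above could be modeled roughly as follows (a sketch under the assumption that they are stored alongside the sleeping workflow's row; names and types are illustrative):

```rust
// Hypothetical wake conditions stored with a sleeping workflow.
#[allow(dead_code)]
enum WakeCondition {
    // Picked up by the first available node.
    Immediately,
    // Wake at a unix-millisecond timestamp.
    Deadline { ts: i64 },
    // Wake when any one of these signals arrives.
    Signal { names: Vec<String> },
    // Wake when the given sub workflow completes.
    SubWorkflow { workflow_id: u64 },
}

fn main() {
    let condition = WakeCondition::Deadline { ts: 1_715_212_800_000 };
    if let WakeCondition::Deadline { ts } = condition {
        println!("workflow sleeps until {ts}");
    }
}
```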
100 changes: 100 additions & 0 deletions docs/libraries/workflow/OVERVIEW.md
@@ -0,0 +1,100 @@
# Overview

Workflows are designed to provide highly durable code executions for distributed systems. The main goal is to allow for writing easy-to-understand, multi-step programs with effective error handling, retryability, and rigid state.

## Goals

**Primary**

- Performance
- Quick iteration speed
- Only depend on CockroachDB

**Secondary**

- Easy to monitor & manage via simple SQL queries
- Easier to understand than messages
- Rust-native
- Run in-process and as part of the binary to simplify architecture
- Leverage traits to reduce copies and needless ser/de
- Use native serde instead of Protobuf for simplicity (**this comes at the cost of the verifiable backward compatibility that Protobuf provides**)
- Lay foundations for OpenGB

## Use cases

- Billing cron jobs with batch
- Creating servers
- Email loops
- Creating dynamic servers
- What about dynamic server lifecycle? Is this more of an actor? This is blending between state and other stuff.
- Deploying CF workers

## Questions

- Concurrency
- Nondeterministic patches: https://docs.temporal.io/dev-guide/typescript/versioning#patching
- Do we plan to support side effects?

## Relation to existing Chirp primitives

### Messages

Workflows replace the use case of messages for durable execution, which covers almost all uses of messages.

The biggest pain point with messages is the lack of a rigid state. Message executions always match the following outline:

1. Read whatever data is required
2. Perform some action(s)
3. Update data as needed
4. Finish (possibly publishing more messages) OR, upon failure, start over at step 1

The issue with this is that messages do not have any knowledge of messages that came before them, their own previous failed executions, or even other messages of the same system executing in parallel. Without thorough, manually written sync checks and consistency validations (which are verbose and hard to follow), this type of execution often results in an overall broken state of whatever system the message is acting on (e.g. matchmaking, server provisioning).

**Once a broken state is reached, the retry system for messages _practically never_ successfully retries the message.**

### Cross-package hooks

We currently use messages for hooking into events from other workflows so we don't have to bake in support directly.

This is potentially error-prone since it makes control flow more opaque.

TBD on if we keep this pattern.

## Post-workflow message uses

Messages should still be used, but much less frequently. They're helpful for:

**Real-time Data Processing**

- When you have a continuous flow of data that needs to be processed in real-time or near-real-time.
- Examples include processing sensor data, social media feeds, financial market data, or clickstream data.
- Stream processing frameworks like Apache Kafka, Apache Flink, or Apache Spark Streaming are well-suited for handling high-volume, real-time data streams.

**Complex Event Processing (CEP)**

- When you need to detect and respond to patterns, correlations, or anomalies in real-time data streams.
- CEP involves analyzing and combining multiple event streams to identify meaningful patterns or trigger actions.
- Stream processing frameworks provide capabilities for defining and matching complex event patterns in real-time.

**Data Transformation and Enrichment**

- When you need to transform, enrich, or aggregate data as it arrives in real-time.
- This can involve tasks like data cleansing, normalization, joining with other data sources, or applying machine learning models.
- Stream processing allows you to process and transform data on-the-fly, enabling real-time analytics and insights.

**Continuous Data Integration**

- When you need to continuously integrate and process data from multiple sources in real-time.
- This can involve merging data streams, performing data synchronization, or updating downstream systems.
- Stream processing frameworks provide connectors and integrations with various data sources and sinks.

**Real-time Monitoring and Alerting**

- When you need to monitor data streams in real-time and trigger alerts or notifications based on predefined conditions.
- Stream processing allows you to define rules and thresholds to detect anomalies, errors, or critical events and send real-time alerts.

**High-throughput, Low-latency Processing**

- When you have a high volume of data that needs to be processed with low latency.
- Stream processing frameworks are designed to handle high-throughput data streams and provide low-latency processing capabilities.
- This is particularly useful in scenarios like fraud detection, real-time recommendations, or real-time bidding in advertising systems.
98 changes: 0 additions & 98 deletions docs/libraries/workflow/WORKFLOW.md

This file was deleted.

2 changes: 1 addition & 1 deletion lib/chirp-workflow/core/Cargo.toml
@@ -25,7 +25,7 @@ rivet-runtime = { path = "../../runtime" }
rivet-util = { path = "../../util/core" }
serde = { version = "1.0.198", features = ["derive"] }
serde_json = "1.0.116"
sqlx = { version = "0.7.4", features = ["runtime-tokio", "postgres", "uuid", "ipnetwork"] }
sqlx = { version = "0.7.4", features = ["runtime-tokio", "postgres", "uuid", "json", "ipnetwork"] }
thiserror = "1.0.59"
tokio = { version = "1.37.0", features = ["full"] }
tracing = "0.1.40"
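For context on the newly added `json` feature (a sketch, assuming hypothetical table and column names): it enables `sqlx::types::Json`, which lets serde-serializable values be bound directly to Postgres/CockroachDB JSONB columns.

```rust
use sqlx::types::Json;
use sqlx::PgPool;
use uuid::Uuid;

#[derive(serde::Serialize)]
struct ActivityOutput {
    server_id: u64,
}

// Hypothetical table and columns, shown only to illustrate the feature.
async fn insert_event(
    pool: &PgPool,
    workflow_id: Uuid,
    output: ActivityOutput,
) -> sqlx::Result<()> {
    sqlx::query("INSERT INTO workflow_events (workflow_id, output) VALUES ($1, $2)")
        .bind(workflow_id)
        .bind(Json(output)) // requires sqlx's "json" feature
        .execute(pool)
        .await?;
    Ok(())
}
```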
