Merged
29 changes: 7 additions & 22 deletions pkgs/website/src/content/docs/concepts/context.mdx
@@ -217,34 +217,19 @@ interface ResolvedFlowWorkerConfig {
The most common use case is detecting when a message is on its final retry attempt:

````diff
 ```typescript "ctx.workerConfig" "ctx.rawMessage" ".read_ct" ".retry.limit"
-// Queue handler - detect final retry attempt
-async function processOrder(input, ctx) {
-  const isLastRetry = ctx.rawMessage.read_ct >= ctx.workerConfig.retry.limit;
+async function sendEmail(input, ctx) {
+  const isLastAttempt = ctx.rawMessage.read_ct >= ctx.workerConfig.retry.limit;

-  if (isLastRetry) {
-    // Send alert or fallback action before final failure
-    await sendFailureAlert(input.orderId, ctx.workerConfig.queueName);
+  if (isLastAttempt) {
+    // Use fallback email service on final attempt
+    return await sendWithFallbackProvider(input.to, input.subject, input.body);
   }

-  return processOrderLogic(input);
+  // Use primary email service for regular attempts
+  return await sendWithPrimaryProvider(input.to, input.subject, input.body);
 }
 ```

-```typescript "ctx.workerConfig" ".visibilityTimeout"
-// Flow handler - access worker configuration
-.step({ slug: 'api_call' }, async (input, ctx) => {
-  // Use worker's visibility timeout to set safe API timeout
-  const safeTimeout = (ctx.workerConfig.visibilityTimeout - 2) * 1000; // 2s buffer
-
-  console.log(`Using ${safeTimeout}ms timeout based on worker visibility timeout`);
-
-  const response = await fetch(input.url, {
-    signal: AbortSignal.timeout(safeTimeout)
-  });
-
-  return await response.json();
-})
-```
````
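The last-attempt check in the updated example hinges on the message read counter. A minimal sketch of the arithmetic, assuming (not stated in the diff) that pgmq sets `read_ct` to 1 on the first delivery and increments it on each subsequent read:

```typescript
// Hypothetical helper mirroring the check in the example above.
// Assumption: read_ct === 1 on the first delivery, so the final
// permitted attempt is the one where read_ct reaches retry.limit.
function isLastAttempt(readCt: number, retryLimit: number): boolean {
  return readCt >= retryLimit;
}

// With retry.limit = 3: deliveries 1-2 are regular attempts,
// delivery 3 is the last one.
console.log([1, 2, 3].map((readCt) => isLastAttempt(readCt, 3))); // [false, false, true]
```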

<Aside type="note">
The `workerConfig` object is deeply frozen to prevent handlers from modifying configuration values that could affect other message executions. It represents the final resolved configuration with all defaults applied, not the original user input. Since all defaults have been resolved, you can safely access any field without undefined checks.
18 changes: 9 additions & 9 deletions pkgs/website/src/content/docs/getting-started/update-pgflow.mdx
@@ -13,12 +13,12 @@ import { Aside, Steps } from "@astrojs/starlight/components";
This guide explains how to update pgflow to the latest version, including package updates and database migrations. pgflow updates involve two main components that need to be updated together.

<Aside type="caution" title="Unified Versioning">
-pgflow uses changesets for version management, which means all packages are versioned together. When you update one pgflow package, all others should be updated to the same version to maintain compatibility.
+All pgflow packages (npm and jsr) are versioned together. When you update one pgflow package, all others should be updated to the same version to maintain compatibility.
</Aside>

## What needs to be updated?

-When updating pgflow, you need to update two types of packages:
+When updating pgflow, you need to update:

1. **npm packages** - CLI, core libraries, client, and DSL packages
2. **JSR package** - The Edge Worker package (published to JSR registry)
@@ -31,19 +31,19 @@ When updating pgflow, you need to update two types of packages:
Update all pgflow npm packages to the latest version:

```bash frame="none"
-npm update @pgflow/cli @pgflow/core @pgflow/client @pgflow/dsl @pgflow/example-flows
+npm update @pgflow/client # or @pgflow/dsl, @pgflow/core etc.
```

Or if using yarn:

```bash frame="none"
-yarn upgrade @pgflow/cli @pgflow/core @pgflow/client @pgflow/dsl @pgflow/example-flows
+yarn upgrade @pgflow/client # or @pgflow/dsl, @pgflow/core etc.
```

Or if using pnpm:

```bash frame="none"
-pnpm update @pgflow/cli @pgflow/core @pgflow/client @pgflow/dsl @pgflow/example-flows
+pnpm update @pgflow/client # or @pgflow/dsl, @pgflow/core etc.
```

### 2. Update Edge Worker (JSR package)
@@ -52,10 +52,10 @@ Update the Edge Worker package in your JSR imports. In your Edge Function files,

```typescript
// Before
-import { EdgeWorkerWithPgflow } from "jsr:@pgflow/edge-worker@^0.5.0";
+import { EdgeWorker } from "jsr:@pgflow/edge-worker@^0.5.0";

// After (replace with latest version)
-import { EdgeWorkerWithPgflow } from "jsr:@pgflow/edge-worker@^0.6.0";
+import { EdgeWorker } from "jsr:@pgflow/edge-worker@^0.6.0";
```

### 3. Run pgflow install to update migrations
@@ -98,7 +98,7 @@
npx supabase migrations up
pgflow uses a simple but effective migration system to ensure migrations are applied correctly:

<Aside type="note" title="How Migration Prefixing Works">
-pgflow uses a simple but effective trick with two timestamps: it keeps the original timestamp in the filename so it can search for already-installed migrations in your folder, while prefixing with a new timestamp that ensures the migration appears as the newest in your Supabase migrations folder. This prevents Supabase from complaining about applying migrations with timestamps older than already-applied ones.
+pgflow uses a simple but effective trick with two timestamps: it keeps the original filename in the migration it creates in your project, so it can search for already-installed migrations, while prefixing with a new timestamp that ensures the migration appears as the newest in your Supabase migrations folder. This prevents Supabase from complaining about applying migrations with timestamps older than already-applied ones.

For example, a pgflow migration `20250429164909_pgflow_initial.sql` might be installed as `20250430000010_20250429164909_pgflow_initial.sql` in your project.
</Aside>
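The two-timestamp trick described above can be sketched as follows. This is an illustration only, not pgflow's actual CLI code; the helper names and the substring-matching detail are assumptions:

```typescript
// Sketch of the two-timestamp prefixing described in the Aside above.
// The original filename is preserved inside the installed name, so
// already-installed migrations can be detected by suffix matching.
function prefixMigration(originalFilename: string, installTimestamp: string): string {
  return `${installTimestamp}_${originalFilename}`;
}

function isAlreadyInstalled(localFilenames: string[], originalFilename: string): boolean {
  return localFilenames.some((name) => name.endsWith(originalFilename));
}

// Matches the example from the docs:
const installed = prefixMigration('20250429164909_pgflow_initial.sql', '20250430000010');
console.log(installed); // 20250430000010_20250429164909_pgflow_initial.sql
```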
@@ -200,4 +200,4 @@
For production updates, follow a more careful approach:
Always test pgflow updates in a non-production environment first. While pgflow migrations are designed to be safe, any database schema change carries inherent risks in production systems.
</Aside>

Your pgflow installation is now updated with the latest features, bug fixes, and database schema improvements!
@@ -0,0 +1,50 @@
---
draft: false
title: 'pgflow 0.6.1: Worker Configuration in Handler Context'
description: 'Handlers can now access worker configuration through context.workerConfig for intelligent decision-making based on retry limits, concurrency settings, and other worker parameters.'
date: 2025-09-05
authors:
- jumski
tags:
- release
- edge-worker
- context
- configuration
featured: true
cover:
alt: 'pgflow 0.6.1 worker config in handler context cover image'
image: '../../../assets/cover-images/pgflow-0-6-1-worker-config-in-handler-context.png'
---

import { Aside } from "@astrojs/starlight/components";

pgflow 0.6.1 adds `workerConfig` to the handler execution context, enabling intelligent decision-making based on worker configuration.

## Worker Configuration in Context

Handlers now have access to the complete worker configuration through `context.workerConfig` ([#200](https://github.com/pgflow-dev/pgflow/issues/200)). This enables smarter handlers that can adapt their behavior based on retry limits, concurrency settings, timeouts, and other worker parameters.

```typescript
async function sendEmail(input, context) {
  const isLastAttempt = context.rawMessage.read_ct >= context.workerConfig.retry.limit;

  if (isLastAttempt) {
    // Use fallback email service on final attempt
    return await sendWithFallbackProvider(input.to, input.subject, input.body);
  }

  // Use primary email service for regular attempts
  return await sendWithPrimaryProvider(input.to, input.subject, input.body);
}
```

See the [context documentation](/concepts/context/#workerconfig) for complete details on available configuration properties and additional examples.


## Bug Fix

This release also fixes retry strategy validation to only enforce the 50-limit cap for exponential retry strategy, allowing higher limits for fixed strategy when needed ([#199](https://github.com/pgflow-dev/pgflow/issues/199)).
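The corrected rule can be restated in code. This is a hypothetical sketch of what the fix implies, not the Edge Worker's actual validation; the function name and the positivity check are assumptions:

```typescript
// Sketch of the fixed validation rule from #199: the 50-retry cap
// applies only to the exponential strategy (whose delays grow fast);
// the fixed strategy may use higher limits.
type RetryStrategy = 'exponential' | 'fixed';

function isRetryLimitValid(strategy: RetryStrategy, limit: number): boolean {
  if (strategy === 'exponential') {
    return limit > 0 && limit <= 50;
  }
  return limit > 0; // fixed strategy: any positive limit is accepted
}

console.log(isRetryLimitValid('exponential', 100)); // false
console.log(isRetryLimitValid('fixed', 100)); // true
```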

## Updating to 0.6.1

Follow our [update guide](/getting-started/update-pgflow/) for step-by-step upgrade instructions.