
feat(cli): setup live-queries #1208

Merged
Tobbe merged 17 commits into main from tobbe-reat-live-query-setup
Feb 20, 2026

Conversation


@Tobbe Tobbe commented Feb 16, 2026

Adds CLI command yarn cedar setup live-queries to configure Postgres LISTEN/NOTIFY-based live query invalidation. Creates database triggers that send notifications on table changes, and a Node.js listener that invalidates live query cache keys.
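The invalidation flow can be illustrated with a minimal sketch (keysFromPayload is a hypothetical name, not part of the CLI templates; it assumes the NOTIFY payload is JSON of the shape { table, recordId }):

```typescript
// Hypothetical helper (not from the actual templates): maps a NOTIFY
// payload of the shape { table, recordId } to live query cache keys.
function keysFromPayload(payload: string): string[] {
  const { table, recordId } = JSON.parse(payload) as {
    table: string
    recordId: string
  }
  // Invalidate the list query key and the individual record key.
  return [`Query.${table}`, `${table}:${recordId}`]
}

console.log(keysFromPayload('{"table":"Post","recordId":"123"}'))
// → [ 'Query.Post', 'Post:123' ]
```

In the real setup, the listener would pass keys like these to the live query store's invalidate call.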


netlify bot commented Feb 16, 2026

Deploy Preview for cedarjs canceled.

🔨 Latest commit: 8ef80a5
🔍 Latest deploy log: https://app.netlify.com/projects/cedarjs/deploys/6997d067938012000717bbda

@github-actions github-actions bot added this to the next-release milestone Feb 16, 2026

nx-cloud bot commented Feb 16, 2026

🤖 Nx Cloud AI Fix

Ensure the fix-ci command is configured to always run in your CI pipeline to get automatic fixes in future runs. For more information, please see https://nx.dev/ci/features/self-healing-ci


View your CI Pipeline Execution ↗ for commit 8ef80a5

Command Status Duration Result
nx run-many -t build:pack --exclude create-ceda... ✅ Succeeded 2s View ↗
nx run-many -t build ✅ Succeeded 4s View ↗
nx run-many -t test --minWorkers=1 --maxWorkers=4 ✅ Succeeded 1m 34s View ↗
nx run-many -t test:types ✅ Succeeded 9s View ↗

☁️ Nx Cloud last updated this comment at 2026-02-20 03:27:38 UTC


greptile-apps bot commented Feb 16, 2026

Greptile Summary

Adds CLI command yarn cedar setup live-queries to configure Postgres LISTEN/NOTIFY-based live query invalidation. Creates database triggers that send notifications on table changes, and a Node.js listener that invalidates live query cache keys.

Key improvements in templates:

  • Migration template sends only recordId instead of full row data to avoid 8KB payload limit
  • Listener template uses connectionGeneration pattern to prevent reconnect loops
  • Event trigger automatically attaches to new tables created after migration

Issues found:

  • Test project files (test-project/api/src/lib/liveQueriesListener.ts and test-project/api/db/migrations/20260215112230_live_queries_notifications/migration.sql) don't match the improved templates
  • Test migration sends full row_to_json(NEW/OLD) which can hit payload limits
  • Test listener lacks reconnect loop prevention and expects record object instead of recordId
  • Test migration missing event trigger for future tables

Confidence Score: 3/5

  • Safe to merge but test-project files need updates to match improved templates
  • The CLI templates are well-designed with fixes for known issues (reconnect loops, payload limits, future table support). However, the test-project implementation files still have the old versions with these issues, creating an inconsistency that will confuse users and doesn't properly demonstrate the feature
  • Update test-project/api/src/lib/liveQueriesListener.ts and test-project/api/db/migrations/20260215112230_live_queries_notifications/migration.sql to match the improved templates

Important Files Changed

Filename Overview
packages/cli/src/commands/setup/live-queries/liveQueriesHandler.js CLI handler for setting up live queries - creates migration, listener file, and wires into GraphQL handler. Well-structured with proper validation and error handling.
packages/cli/src/commands/setup/live-queries/templates/liveQueriesListener.ts.template Template for Postgres listener with connectionGeneration pattern to prevent reconnect loops. Sends only recordId for efficient notifications.
packages/cli/src/commands/setup/live-queries/templates/migration.sql.template Improved template sends only recordId instead of full row, includes event trigger to automatically attach triggers to new tables.
test-project/api/src/lib/liveQueriesListener.ts Test project listener has reconnect loop issue - lacks connectionGeneration pattern and expects full record object from notifications.
test-project/api/db/migrations/20260215112230_live_queries_notifications/migration.sql Test project migration still uses row_to_json with full record data (can exceed 8000 byte limit), and missing event trigger for new tables.

Sequence Diagram

sequenceDiagram
    participant App as GraphQL Server
    participant Listener as liveQueriesListener
    participant PG as PostgreSQL
    participant Cache as liveQueryStore
    participant Client as GraphQL Client

    App->>Listener: startLiveQueryListener()
    Listener->>PG: LISTEN table_change
    
    Note over Client,App: Client makes @live query
    Client->>App: query @live { posts }
    App->>Cache: Store query result with key
    App-->>Client: Return data + invalidation key
    
    Note over PG: User mutates data
    App->>PG: INSERT/UPDATE/DELETE on table
    PG->>PG: Trigger fires cedar_notify_table_change()
    PG->>PG: pg_notify('table_change', json_build_object(...))
    
    PG-->>Listener: NOTIFY with table + recordId
    Listener->>Listener: Parse notification payload
    Listener->>Listener: Generate invalidation keys
    Listener->>Cache: invalidate(['Query.posts', 'Post:123'])
    Cache-->>Client: SSE update with new data

Last reviewed commit: 90ca6fb


@greptile-apps greptile-apps bot left a comment


10 files reviewed, 3 comments


Comment on lines +79 to +104

async function connect() {
  try {
    if (client) {
      await client.end()
    }

    client = new Client({
      connectionString: process.env.DATABASE_URL,
    })

    client.on('notification', async (msg) => {
      await onNotification(msg.payload)
    })

    client.on('error', (error) => {
      logger.error(
        { error },
        'Postgres live query listener encountered an error'
      )
      reconnect()
    })

    client.on('end', () => {
      logger.warn('Postgres live query listener disconnected')
      reconnect()
    })

Reconnect loop from intentional client.end()

When connect() is called while a previous client exists, client.end() on line 82 fires the end event on the old client (line 101-104), which calls reconnect(), scheduling another connect() in 5 seconds. The current connect() continues and successfully establishes a new connection — but after 5 seconds the spurious reconnect fires, tears down the new connection, and the cycle repeats.

To fix this, either remove the event listeners from the old client before ending it, or set a flag to suppress reconnection during an intentional reconnect:

Suggested change (the first block is the current code, the second is the suggested replacement):

Current:

async function connect() {
  try {
    if (client) {
      await client.end()
    }
    client = new Client({
      connectionString: process.env.DATABASE_URL,
    })
    client.on('notification', async (msg) => {
      await onNotification(msg.payload)
    })
    client.on('error', (error) => {
      logger.error(
        { error },
        'Postgres live query listener encountered an error'
      )
      reconnect()
    })
    client.on('end', () => {
      logger.warn('Postgres live query listener disconnected')
      reconnect()
    })

Suggested:

async function connect() {
  try {
    if (client) {
      client.removeAllListeners()
      await client.end().catch(() => {})
    }
    client = new Client({
      connectionString: process.env.DATABASE_URL,
    })
    client.on('notification', async (msg) => {
      await onNotification(msg.payload)
    })
    client.on('error', (error) => {
      logger.error(
        { error },
        'Postgres live query listener encountered an error'
      )
      reconnect()
    })
    client.on('end', () => {
      logger.warn('Postgres live query listener disconnected')
      reconnect()
    })

Comment on lines +11 to +19

PERFORM pg_notify(
  'table_change',
  json_build_object(
    'schema', TG_TABLE_SCHEMA,
    'table', TG_TABLE_NAME,
    'operation', TG_OP,
    'record', record_data
  )::text
);

pg_notify payload can exceed 8000-byte limit

PostgreSQL's NOTIFY payload is limited to ~8000 bytes. row_to_json(NEW) / row_to_json(OLD) serializes the entire row — for tables with large text columns, many columns, or binary data this will exceed the limit. When that happens, pg_notify raises an exception that rolls back the entire transaction, causing the original INSERT/UPDATE/DELETE to fail.

Consider truncating the payload or only sending the fields needed for invalidation (table name, operation, and primary key):

Suggested change (the first block is the current code, the second is the suggested replacement):

Current:

PERFORM pg_notify(
  'table_change',
  json_build_object(
    'schema', TG_TABLE_SCHEMA,
    'table', TG_TABLE_NAME,
    'operation', TG_OP,
    'record', record_data
  )::text
);

Suggested:

PERFORM pg_notify(
  'table_change',
  left(
    json_build_object(
      'schema', TG_TABLE_SCHEMA,
      'table', TG_TABLE_NAME,
      'operation', TG_OP,
      'record', record_data
    )::text,
    7999
  )
);

Alternatively, you could only include the id field (or the primary key) in record_data instead of the full row, which would keep payloads small and avoid the truncation issue entirely.
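To make the size risk concrete, a quick back-of-the-envelope check (hypothetical data, not from the PR):

```typescript
// Hypothetical payload sizes (illustrative data only): a full-row payload
// with one large text column vs. an id-only payload.
const fullRow = JSON.stringify({
  table: 'Post',
  operation: 'UPDATE',
  record: { id: 123, body: 'x'.repeat(10_000) }, // e.g. a long post body
})
const idOnly = JSON.stringify({
  table: 'Post',
  operation: 'UPDATE',
  recordId: '123',
})

// Byte length of the serialized payload, as Postgres would count it.
const bytes = (s: string) => new TextEncoder().encode(s).length

console.log(bytes(fullRow) > 8000) // full row blows the ~8000-byte limit
console.log(bytes(idOnly) < 8000) // id-only payload stays tiny
```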

Tobbe (Member Author) replied:

@greptileai How can I make it only include the primary key of the record?

greptile-apps bot (Contributor) replied:

Here's how to send only the primary key instead of the full row:

Suggested change (the first block is the current code, the second is the suggested replacement):

Current:

PERFORM pg_notify(
  'table_change',
  json_build_object(
    'schema', TG_TABLE_SCHEMA,
    'table', TG_TABLE_NAME,
    'operation', TG_OP,
    'record', record_data
  )::text
);

Suggested:

IF (TG_OP = 'DELETE') THEN
  record_data = json_build_object('id', OLD.id);
ELSE
  record_data = json_build_object('id', NEW.id);
END IF;

This assumes your primary key column is named id. If your tables use a different primary key column name, replace id with that column name. For composite primary keys, you'd include multiple fields in the json_build_object call (e.g., 'user_id', NEW.user_id, 'order_id', NEW.order_id).

END;
$$ LANGUAGE plpgsql;

SELECT cedar_attach_notify_triggers();

Triggers only attached for tables existing at migration time

cedar_attach_notify_triggers() is invoked once during migration. Any tables created after this migration runs (e.g., from future Prisma migrations) will not have the notify trigger. Users would need to manually re-run SELECT cedar_attach_notify_triggers(); or re-run the migration.

Consider calling out in the docs / final CLI message that the function needs to be re-invoked after schema changes, or adding a note about this limitation.


Tobbe commented Feb 16, 2026

@greptileai please do a full review and update your existing review summary


@greptile-apps greptile-apps bot left a comment


21 files reviewed, 5 comments


Comment on lines +6 to +8

  record_data = row_to_json(OLD);
ELSE
  record_data = row_to_json(NEW);

Test project migration uses row_to_json(OLD/NEW) instead of just the id field like the template does. This is inconsistent with the improved template at packages/cli/src/commands/setup/live-queries/templates/migration.sql.template which sends only recordId.


END;
$$ LANGUAGE plpgsql;

SELECT cedar_attach_notify_triggers();

Test project migration is missing the event trigger (cedar_event_on_table_create) that automatically attaches triggers to future tables. The template includes this at lines 55-98.

Comment on lines +79 to +112

async function connect() {
  try {
    if (client) {
      await client.end()
    }

    client = new Client({
      connectionString: process.env.DATABASE_URL,
    })

    client.on('notification', async (msg) => {
      await onNotification(msg.payload)
    })

    client.on('error', (error) => {
      logger.error(
        { error },
        'Postgres live query listener encountered an error'
      )
      reconnect()
    })

    client.on('end', () => {
      logger.warn('Postgres live query listener disconnected')
      reconnect()
    })

    await client.connect()
    await client.query(`LISTEN ${LIVE_QUERY_CHANNEL}`)

    logger.info('Postgres live query listener connected')
  } catch (error) {
    logger.error({ error }, 'Failed to connect Postgres live query listener')
    reconnect()

Test project listener lacks the connectionGeneration pattern that prevents reconnect loops. When client.end() is called on line 82, it triggers the end event handler (line 101-104) which calls reconnect(), creating a loop. The template fixes this with a generation counter (see template lines 75-127).
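The generation-counter idea can be sketched in isolation (this is an assumption about the pattern, not the template's actual code): each connection attempt captures a monotonically increasing generation, and handlers belonging to a superseded connection become no-ops.

```typescript
// Sketch of a connection-generation guard (hypothetical, not template code).
// Each (re)connect bumps a shared counter; handlers from superseded
// connections see a stale generation and skip scheduling a reconnect.
let connectionGeneration = 0

function newConnectionGuard(): () => boolean {
  const myGeneration = ++connectionGeneration
  return () => myGeneration === connectionGeneration
}

const first = newConnectionGuard() // original connection
const second = newConnectionGuard() // intentional reconnect supersedes it

// A stale 'end' handler would check its guard and do nothing.
console.log(first(), second()) // → false true
```

This is why an intentional client.end() on the old connection cannot tear down the new one: the old connection's 'end' handler fails its generation check.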

Comment on lines +14 to +16

interface GetKeysToInvalidateArgs {
  table: string
  record: unknown

Interface expects record: unknown but the improved migration template sends recordId: string. This mismatch means the listener won't work correctly with the improved migration template.

Comment on lines +19 to +26

function getKeysToInvalidate({ table, record }: GetKeysToInvalidateArgs) {
  const keys = [`Query.${table}`, `Query.${table.toLocaleLowerCase()}`]

  if (isRecordWithId(record)) {
    keys.push(`${table}:${String(record.id)}`)
  }

  return keys

Function extracts id from record object, but the improved migration template sends recordId directly as a string. Should accept recordId parameter instead (see template line 15-27).
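A recordId-based version might look like this (a sketch derived from the snippet above; the template's exact code is not shown in this thread, so names and shapes are assumptions):

```typescript
interface GetKeysToInvalidateArgs {
  table: string
  recordId: string
}

// Accepts the recordId string sent by the improved migration template,
// instead of extracting an id from a full record object.
function getKeysToInvalidate({ table, recordId }: GetKeysToInvalidateArgs) {
  const keys = [`Query.${table}`, `Query.${table.toLocaleLowerCase()}`]
  keys.push(`${table}:${recordId}`)
  return keys
}

console.log(getKeysToInvalidate({ table: 'Post', recordId: '123' }))
// → [ 'Query.Post', 'Query.post', 'Post:123' ]
```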

@Tobbe Tobbe enabled auto-merge (squash) February 20, 2026 03:19
@Tobbe Tobbe merged commit d5786d9 into main Feb 20, 2026
42 checks passed
@Tobbe Tobbe deleted the tobbe-reat-live-query-setup branch February 20, 2026 03:27
@github-actions

The changes in this PR are now available in 3.0.0-canary.13430

Tobbe added a commit that referenced this pull request Mar 1, 2026
Adds CLI command `yarn cedar setup live-queries` to configure Postgres
LISTEN/NOTIFY-based live query invalidation. Creates database triggers
that send notifications on table changes, and a Node.js listener that
invalidates live query cache keys.
