Prep for usage: bulk API, export, WebSocket, mount hardening, E2E#4

Merged
khaliqgant merged 17 commits into main from prep-for-usage
Mar 24, 2026

Conversation


@khaliqgant khaliqgant commented Mar 24, 2026

Summary

  • Bulk API & Export: Bulk seed endpoint, workspace export (tar/patch), binary file support
  • WebSocket push: Real-time file-change notifications via WebSocket
  • Mount hardening: Conflict resolution, bidirectional sync improvements
  • E2E infrastructure: Test script, workflow definitions for 6 phases (E2E, bulk/export, dev experience, CI/publish, hosted server, landing page)
  • SDK updates: TypeScript SDK extended with bulk, export, and WebSocket client methods
  • Design docs: Bulk export design, knowledge graph spec, mount hardening plan

This PR will be continuously updated as the workflows run.

Test plan

  • E2E test: Go server + two mount daemons sync bidirectionally
  • Bulk seed + workspace export
  • WebSocket push notifications
  • CLI: login/mount/seed/export
  • CI pipeline: Go tests + SDK typecheck + E2E
  • Hosted server deployment
  • Landing page at relayfile.dev

🤖 Generated with Claude Code



…E2E infrastructure

Extends the Go server with bulk seed/export endpoints, WebSocket file-change
notifications, and binary file support. Hardens mount sync with conflict
resolution and bidirectional sync. Adds E2E test script, workflow definitions,
design docs, and updated TypeScript SDK.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

khaliqgant and others added 7 commits March 24, 2026 15:18
Adds relayfile-cli with login/mount/seed/export commands, Homebrew tap
formula, GitHub Actions release workflow, install script, and user-facing
docs (API reference, CLI design, guides). Updates .gitignore to exclude
compiled binaries and agent tool configs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds OIDC-based npm publish with provenance for the TypeScript SDK.
Includes an npm update step, per npm trusted publishing guidance, to
avoid outdated npm versions on runners.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds dedicated GitHub Actions workflows for CI (tests + typecheck),
npm publishing, and Go binary releases. Updates SDK package.json,
tsconfig, and README. Adds CI/CD design doc.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds sdk/relayfile package that downloads the correct platform binary
on install, so users can `npx relayfile` or `npm install -g relayfile`.
Updates publish-npm workflow to use OIDC (no NPM_TOKEN), adds
npm update step, and publishes both @relayfile/sdk and relayfile
packages with version sync.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Configures root package.json with workspaces pointing to sdk/relayfile-sdk
and sdk/relayfile. Makes CLI postinstall non-fatal so installs work before
a release binary exists.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Renames sdk/relayfile-sdk and sdk/relayfile to packages/. Updates all
workflow files, GitHub Actions, and package.json references to use the
new paths.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Provenance only works in CI with OIDC — removing from package.json so
local publishes work. CI workflows already pass --provenance explicitly.
Also normalizes repository URLs to avoid npm publish warnings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

khaliqgant and others added 4 commits March 24, 2026 16:19
Astro + Tailwind marketing site with hero, feature grid, architecture
diagram, API preview, and use cases sections.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CI was failing with "workflow file issue" due to npm cache requiring
a lockfile path and the workers-typecheck job referencing packages/server
which doesn't exist. Simplified to use go.mod for Go version.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Resolves merge conflicts from sdk/ -> packages/ rename.

Fixes:
- Tar export: log error instead of writing JSON after headers sent
- filepath.Clean -> path.Clean for OS-independent tar entry names
- Bulk write: reject files on store read errors instead of skipping
  permission checks
- WebSocket: load catch-up events before subscribing to avoid duplicates
- normalizeEncoding: return empty string for utf-8 to preserve omitempty

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
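The filepath.Clean -> path.Clean fix above matters because tar entry names must always use forward slashes, while filepath.Clean uses the OS separator (backslashes on Windows). A minimal sketch of the idea, assuming a hypothetical tarEntryName helper (not the PR's actual code):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// tarEntryName normalizes a workspace-relative file path into a tar entry
// name. Tar entries always use forward slashes, so we normalize separators
// first and then clean with path.Clean (slash-based), not filepath.Clean
// (which would use backslashes on Windows). Prefixing "/" before cleaning
// also prevents ".." segments from escaping the archive root.
func tarEntryName(p string) string {
	p = strings.ReplaceAll(p, "\\", "/")
	return path.Clean("/" + p)[1:] // strip leading "/" to keep entries relative
}

func main() {
	fmt.Println(tarEntryName("docs//readme.md"))      // docs/readme.md
	fmt.Println(tarEntryName("a/./b/../c.txt"))       // a/c.txt
	fmt.Println(tarEntryName("win\\style\\file.txt")) // win/style/file.txt
}
```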

Adds FilesystemEventType, EventOrigin, OperationStatus, WritebackState,
SyncProviderStatus, SyncProviderStatusState to the SDK's public exports.
These are needed by relayfile-cloud which now imports types from the SDK
instead of duplicating them.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
devin-ai-integration[bot]

This comment was marked as resolved.

Devin review fixes:
1. Tar export: return proper error response instead of empty 200
2. WebSocket: subscribe-before-catchup to prevent missed events,
   with dedup via EventID to avoid replaying catch-up events
3. CLI seed: single progress line instead of N instant messages
4. Bulk write: normalize file path before passing to BulkWrite
   (matches permission check path)
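The subscribe-before-catchup ordering in fix 2 is a standard way to close the gap between a snapshot and a live stream: subscribing first means an event can appear in both sources but never in neither, and the duplicates are then dropped by ID. A stdlib-only sketch of that ordering (types and names are illustrative, not the PR's actual API):

```go
package main

import "fmt"

type Event struct {
	ID   string
	Path string
}

// mergeCatchUp delivers catch-up events first, then drains the live channel
// (already subscribed before catch-up was loaded), dropping any live event
// whose ID was seen during catch-up so nothing is missed or replayed twice.
func mergeCatchUp(catchUp []Event, live <-chan Event) []Event {
	seen := make(map[string]bool)
	var out []Event
	for _, ev := range catchUp {
		seen[ev.ID] = true
		out = append(out, ev)
	}
	for ev := range live {
		if seen[ev.ID] {
			continue // already delivered via catch-up
		}
		seen[ev.ID] = true
		out = append(out, ev)
	}
	return out
}

func main() {
	live := make(chan Event, 2)
	live <- Event{ID: "2", Path: "b.txt"} // also present in catch-up
	live <- Event{ID: "3", Path: "c.txt"} // genuinely new
	close(live)

	catchUp := []Event{{ID: "1", Path: "a.txt"}, {ID: "2", Path: "b.txt"}}
	for _, ev := range mergeCatchUp(catchUp, live) {
		fmt.Println(ev.ID, ev.Path)
	}
	// 1 a.txt
	// 2 b.txt
	// 3 c.txt
}
```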

New E2E tests:
- Bulk write API: creates 5 files, verifies count + no errors
- JSON export: verifies non-empty array response
- Tar export: verifies gzip content-type and magic bytes
- WebSocket: connects, writes file, verifies event received

All new E2E tests (bulk write, export, WebSocket) were using raw
fetch() without the required X-Correlation-Id header, causing 400
responses. Switched to the existing api() helper which includes
the header automatically. Also fixed export route path and file
write method/endpoint.

1. Tar export: split into prepareTarExport (can return error response)
   and streamTarExport (headers committed, errors logged only).
   Prep errors (bad base64, etc.) now return proper HTTP 500.

2. WS reader goroutine: derive context from caller instead of
   context.Background() so it's cancelled on syncer shutdown.

3. E2E WebSocket: fire-and-forget the bulk write that triggers the
   event — prevents unhandled rejection when server shuts down
   before the response arrives.
@khaliqgant khaliqgant merged commit 35bf2e9 into main Mar 24, 2026
5 checks passed

@devin-ai-integration devin-ai-integration bot left a comment


Devin Review found 1 new potential issue.

View 18 additional findings in Devin Review.


return err
}

readCtx, cancel := context.WithCancel(ctx)

🔴 WebSocket connection torn down after every sync cycle due to short-lived parent context

The connectWebSocket method at internal/mountsync/syncer.go:425 derives readCtx from the ctx parameter of SyncOnce. Both callers (cmd/relayfile-mount/main.go:68-69 and cmd/relayfile-cli/main.go:365-366) create a short-lived timeout context (default 15s) per sync cycle: ctx, cancel := context.WithTimeout(rootCtx, *timeout) with defer cancel(). When SyncOnce returns and the timeout context is canceled, readCtx — a child of that context — is also canceled. This causes wsjson.Read(readCtx, conn, &event) in readWebSocketLoop to fail with context.Canceled, tearing down the WebSocket connection. On the next sync cycle, needsWS evaluates to true again (since handleWebSocketDisconnect sets s.wsConn = nil), causing a reconnect. This connect/disconnect cycle repeats every sync interval, making WebSocket real-time streaming completely non-functional — the connection never lives long enough to receive any live events.

Call chain showing the context propagation

run() creates context.WithTimeout(rootCtx, 15s) → passes to SyncOnce(ctx) → passes to connectWebSocket(ctx) → readCtx, cancel := context.WithCancel(ctx) → go readWebSocketLoop(readCtx, conn) → when run() returns, defer cancel() kills readCtx → read loop exits → handleWebSocketDisconnect sets wsConn = nil.

Prompt for agents
In internal/mountsync/syncer.go, the readCtx for the WebSocket read loop must NOT be derived from the per-sync-cycle ctx parameter. Instead, it should use a longer-lived context that survives across sync cycles.

Option 1: Store a long-lived cancel context on the Syncer struct, created once (e.g. in NewSyncer or on first connect), and use that as the parent for readCtx instead of the SyncOnce ctx.

Option 2: Use context.Background() as the parent for readCtx, relying on the wsCancel function for cleanup.

The fix at line 425 of internal/mountsync/syncer.go should change from:
  readCtx, cancel := context.WithCancel(ctx)
to something like:
  readCtx, cancel := context.WithCancel(context.Background())

This ensures the WebSocket read loop survives beyond a single SyncOnce call. The handleWebSocketDisconnect method already handles cleanup via wsCancel.
