
Address PR review feedback: enhance safety, documentation, and test coverage for stream lock detection#761

Merged
TooTallNate merged 2 commits into 12-20-fix_stream_serialization_to_resolve_when_user_releases_lock_instead_of_waiting_for_stream_to_close from copilot/sub-pr-678
Jan 9, 2026

Conversation

Contributor

Copilot AI commented Jan 9, 2026

Description

Addresses review feedback on the stream serialization lock detection implementation. Key improvements:

Documentation

  • Documented LOCK_POLL_INTERVAL_MS polling rationale: Web Streams API provides no lock-release event, requiring periodic reader/writer acquisition attempts. 100ms balances detection latency against serverless execution cost.
  • Clarified that re-acquiring a lock is not supported as a way to trigger additional completion signals
  • Updated documentation to reflect that explicit close() is recommended but not required to prevent hanging
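The polling rationale above can be sketched as follows. This is a hedged illustration of the approach described, not the project's actual implementation: the function name, callback shape, and the `LOCK_POLL_INTERVAL_MS` value are assumptions based on the description.

```typescript
// Sketch: the Web Streams API fires no event when a lock is released, so the
// only portable detection is to periodically attempt getWriter(), which
// throws while another writer holds the lock and succeeds once it is freed.
const LOCK_POLL_INTERVAL_MS = 100; // assumed value from the PR description

function pollForLockRelease(
  writable: WritableStream<string>,
  onReleased: () => void,
): ReturnType<typeof setInterval> {
  const interval = setInterval(() => {
    try {
      // Throws a TypeError while the user still holds the lock.
      const writer = writable.getWriter();
      writer.releaseLock(); // don't keep the probe's lock
      clearInterval(interval);
      onReleased();
    } catch {
      // Still locked; keep polling.
    }
  }, LOCK_POLL_INTERVAL_MS);
  return interval;
}
```

The 100ms interval trades a worst-case ~100ms detection delay against the cost of keeping a serverless invocation alive just to poll.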

Safety & Correctness

  • Added protection against concurrent polling on the same state (stores interval IDs in FlushableStreamState)
  • Fixed race condition: now checks state.streamEnded immediately after read, before starting write operations
  • Made error handling explicit in lock detection functions with detailed comments explaining conservative failure handling
  • Documented dual error propagation strategy (state.reject() + re-throw) for different caller patterns
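The concurrent-polling guard described above can be sketched like this. The `FlushableStreamState` shape and function names here are assumptions for illustration; the real state object carries more fields.

```typescript
// Sketch: storing the interval ID on the shared state makes starting the
// poll idempotent, so repeated calls never stack multiple timers.
interface FlushableStreamState {
  writablePollingInterval?: ReturnType<typeof setInterval>;
}

function startWritablePolling(
  state: FlushableStreamState,
  tick: () => void,
  intervalMs = 100,
): void {
  // Guard: if a poll is already running for this state, do nothing.
  if (state.writablePollingInterval !== undefined) return;
  state.writablePollingInterval = setInterval(tick, intervalMs);
}

function stopWritablePolling(state: FlushableStreamState): void {
  if (state.writablePollingInterval !== undefined) {
    clearInterval(state.writablePollingInterval);
    state.writablePollingInterval = undefined;
  }
}
```

Clearing the field in `stopWritablePolling` is what allows a later, legitimate restart while still blocking concurrent duplicates.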

Test Coverage

  • Write errors during pipe operations
  • pollReadableLock behavior (previously only pollWritableLock tested)
  • Concurrent write operations
  • Prevention of duplicate polling intervals
  • Stream termination while operations in-flight
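The "write errors during pipe operations" case can be illustrated with a minimal pipe loop. This is a generic sketch, not the project's `flushablePipe`; it shows the failure mode the new tests exercise: a rejecting sink must surface its error rather than hang, and end-of-stream must be checked before any write is started.

```typescript
// Sketch of a manual pipe that propagates sink write errors to the caller.
async function pipeAll(
  readable: ReadableStream<string>,
  writable: WritableStream<string>,
): Promise<void> {
  const reader = readable.getReader();
  const writer = writable.getWriter();
  try {
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break; // check end-of-stream before starting a write
      await writer.write(value); // rejects if the sink's write() throws
    }
  } finally {
    reader.releaseLock();
    writer.releaseLock();
  }
}
```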

How did you test your changes?

Added 6 new test cases covering error handling, concurrent operations, and edge cases in flushable-stream.test.ts. All tests validate the polling mechanism and state management under various failure scenarios.

Verified build and lint pass for core package.

PR Checklist - Required to merge

  • 📦 pnpm changeset was run to create a changelog for this PR
    • During beta, we only use "patch" mode for changes. Don't tag minor/major versions.
    • Use pnpm changeset --empty if you are changing documentation or workbench apps
  • 🔒 DCO sign-off passes (run git commit --signoff on your commits)


Copilot AI self-assigned this Jan 9, 2026
@changeset-bot

changeset-bot Bot commented Jan 9, 2026

⚠️ No Changeset found

Latest commit: 55d4802

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

Click here to learn what changesets are, and how to add one.

Click here if you're a maintainer who wants to add a changeset to this PR

@vercel
Contributor

vercel Bot commented Jan 9, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Review | Updated (UTC) |
| --- | --- | --- | --- |
| example-nextjs-workflow-turbopack | Ready | Preview, Comment | Jan 9, 2026 9:27pm |
| example-nextjs-workflow-webpack | Ready | Preview, Comment | Jan 9, 2026 9:27pm |
| example-workflow | Ready | Preview, Comment | Jan 9, 2026 9:27pm |
| workbench-astro-workflow | Ready | Preview, Comment | Jan 9, 2026 9:27pm |
| workbench-express-workflow | Ready | Preview, Comment | Jan 9, 2026 9:27pm |
| workbench-fastify-workflow | Ready | Preview, Comment | Jan 9, 2026 9:27pm |
| workbench-hono-workflow | Ready | Preview, Comment | Jan 9, 2026 9:27pm |
| workbench-nitro-workflow | Ready | Preview, Comment | Jan 9, 2026 9:27pm |
| workbench-nuxt-workflow | Ready | Preview, Comment | Jan 9, 2026 9:27pm |
| workbench-sveltekit-workflow | Ready | Preview, Comment | Jan 9, 2026 9:27pm |
| workbench-vite-workflow | Ready | Preview, Comment | Jan 9, 2026 9:27pm |
| workflow-docs | Ready | Preview, Comment | Jan 9, 2026 9:27pm |

…est coverage

Co-authored-by: TooTallNate <71256+TooTallNate@users.noreply.github.com>
Contributor Author

Copilot AI commented Jan 9, 2026

Just as a heads up, I was blocked by some firewall rules while working on your feedback.

@TooTallNate TooTallNate marked this pull request as ready for review January 9, 2026 23:04
Copilot AI review requested due to automatic review settings January 9, 2026 23:04
@TooTallNate TooTallNate merged commit d15050d into 12-20-fix_stream_serialization_to_resolve_when_user_releases_lock_instead_of_waiting_for_stream_to_close Jan 9, 2026
16 checks passed
@TooTallNate TooTallNate deleted the copilot/sub-pr-678 branch January 9, 2026 23:04
Contributor

Copilot AI left a comment


Pull request overview

This PR addresses review feedback on the stream serialization lock detection implementation by enhancing safety, documentation, and test coverage. The changes focus on making the polling mechanism more robust and well-documented.

Key changes:

  • Added comprehensive documentation explaining the 100ms polling interval rationale and tradeoffs
  • Implemented protection against concurrent polling operations on the same stream state
  • Fixed a race condition by moving the streamEnded check to immediately after read operations
  • Enhanced error handling with detailed comments in lock detection functions

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.

| File | Description |
| --- | --- |
| packages/core/src/flushable-stream.ts | Added polling interval documentation, concurrent polling protection via interval ID tracking, race condition fix, and enhanced error handling with split try-catch blocks |
| packages/core/src/flushable-stream.test.ts | Added 6 new test cases covering write errors, pollReadableLock behavior, concurrent writes, duplicate polling prevention, and stream termination scenarios |
| docs/content/docs/foundations/streaming.mdx | Clarified lock re-acquisition behavior and that explicit close is recommended but not required to prevent hanging |



```diff
 <Callout type="warn">
-**The lock MUST be released to prevent the function from hanging.** If you acquire a lock but never release it, the serverless function will remain active until it times out, even after the step returns and the workflow continues.
+**The lock MUST be released to prevent the function from hanging.** If you acquire a lock but never release it, the serverless function will remain active until it times out, even after the step returns and the workflow continues. Releasing the lock is sufficient; while explicitly closing the stream is safe and recommended for cleanup, it is not required to prevent the function from hanging.
```

Copilot AI Jan 9, 2026


The sentence structure is confusing with multiple nested clauses. Consider breaking this into two clearer sentences: "The lock MUST be released to prevent the function from hanging. If you acquire a lock but never release it, the serverless function will remain active until it times out, even after the step returns and the workflow continues. Releasing the lock is sufficient; explicitly closing the stream is safe and recommended for cleanup but not required to prevent hanging."

Suggested change

```diff
-**The lock MUST be released to prevent the function from hanging.** If you acquire a lock but never release it, the serverless function will remain active until it times out, even after the step returns and the workflow continues. Releasing the lock is sufficient; while explicitly closing the stream is safe and recommended for cleanup, it is not required to prevent the function from hanging.
+**The lock MUST be released to prevent the function from hanging.** If you acquire a lock but never release it, the serverless function will remain active until it times out, even after the step returns and the workflow continues. Releasing the lock is sufficient. Explicitly closing the stream is safe and recommended for cleanup, but it is not required to prevent the function from hanging.
```

Copilot uses AI. Check for mistakes.
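The callout's guidance can be illustrated with a small step-body sketch. The function and stream here are hypothetical, not part of the project's API; it only demonstrates the pattern of releasing the lock when writing is done, without calling `close()`.

```typescript
// Sketch: write all chunks, then release the lock. Releasing is the signal
// that the step is done; close() is optional cleanup, not required to
// prevent the function from hanging.
async function writeGreeting(writable: WritableStream<string>): Promise<void> {
  const writer = writable.getWriter();
  try {
    await writer.write('hello');
    await writer.write('world');
  } finally {
    // Release even on error so the function can complete.
    writer.releaseLock();
  }
}
```

Putting the release in `finally` ensures a thrown write error cannot leave the lock held and the invocation hanging until timeout.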
Comment on lines +234 to +257
```ts
it('should prevent multiple simultaneous polling operations on writable', async () => {
  const { readable, writable } = new TransformStream<string, string>();
  const mockSink = new WritableStream<string>();
  const state = createFlushableState();

  // Start piping in background
  flushablePipe(readable, mockSink, state).catch(() => {});

  // Start polling multiple times
  pollWritableLock(writable, state);
  pollWritableLock(writable, state);
  pollWritableLock(writable, state);

  // Should only have one interval active
  expect(state.writablePollingInterval).toBeDefined();

  // Write and release to clean up
  const userWriter = writable.getWriter();
  await userWriter.write('data');
  userWriter.releaseLock();

  // Wait for cleanup
  await new Promise((r) => setTimeout(r, LOCK_POLL_INTERVAL_MS + 50));
});
```

Copilot AI Jan 9, 2026


The test validates that duplicate polling calls don't create multiple intervals, but it doesn't verify that the interval is actually cleaned up after the operation completes. Consider adding an assertion after the wait to check that state.writablePollingInterval is undefined after cleanup, which would confirm the interval was properly cleared.

Comment on lines +259 to +286
```ts
it('should prevent multiple simultaneous polling operations on readable', async () => {
  let controller: ReadableStreamDefaultController<string>;
  const source = new ReadableStream<string>({
    start(c) {
      controller = c;
    },
  });

  const mockSink = new WritableStream<string>();
  const state = createFlushableState();

  // Start piping in background
  flushablePipe(source, mockSink, state).catch(() => {});

  // Start polling multiple times
  pollReadableLock(source, state);
  pollReadableLock(source, state);
  pollReadableLock(source, state);

  // Should only have one interval active
  expect(state.readablePollingInterval).toBeDefined();

  // Close to clean up
  controller?.close();

  // Wait for cleanup
  await new Promise((r) => setTimeout(r, 100));
});
```

Copilot AI Jan 9, 2026


The test validates that duplicate polling calls don't create multiple intervals, but it doesn't verify that the interval is actually cleaned up after the operation completes. Consider adding an assertion after the wait to check that state.readablePollingInterval is undefined after cleanup, which would confirm the interval was properly cleared.


```diff
 <Callout type="warn">
-**Once a lock is released, no further writes to that stream from that step are allowed.** The framework uses lock release as the signal that the step is done interacting with the stream. Make sure all writes are complete before releasing the lock.
+**Once a lock is released, no further writes to that stream from that step are allowed.** The framework uses lock release as the signal that the step is done interacting with the stream. Make sure all writes are complete before releasing the lock, and do not rely on re-acquiring a lock on the same stream within the same step after it has been released, even if internal implementation details might technically allow it.
```

Copilot AI Jan 9, 2026


The sentence is quite long and has some redundancy. Consider simplifying to: "The framework uses lock release as the signal that the step is done interacting with the stream. Make sure all writes are complete before releasing the lock. Re-acquiring a lock on the same stream within the same step after it has been released is not supported."

Suggested change

```diff
-**Once a lock is released, no further writes to that stream from that step are allowed.** The framework uses lock release as the signal that the step is done interacting with the stream. Make sure all writes are complete before releasing the lock, and do not rely on re-acquiring a lock on the same stream within the same step after it has been released, even if internal implementation details might technically allow it.
+**Once a lock is released, no further writes to that stream from that step are allowed.** The framework uses lock release as the signal that the step is done interacting with the stream. Make sure all writes are complete before releasing the lock. Re-acquiring a lock on the same stream within the same step after it has been released is not supported.
```

TooTallNate added a commit that referenced this pull request Jan 12, 2026
…overage for stream lock detection (#761)

* Initial plan

* Address PR review comments: improve docs, add safety checks, expand test coverage

Co-authored-by: TooTallNate <71256+TooTallNate@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: TooTallNate <71256+TooTallNate@users.noreply.github.com>
