
Conversation

@anonrig (Contributor) commented Oct 10, 2025

We saw that under high concurrency, every Nth request was taking much longer than the others. This was caused by the composable cache: prior to this pull request, the cache consumed the entire data stream (the response of the page) just to access it, and later converted it back into a stream, discarding the buffered copy. This created a lot of GC pressure and a lot of unnecessary memory usage.

Subset of #1002
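
To make the problem concrete, here is a minimal sketch of that old pattern (hypothetical helper names, not the actual source): the cache drains the whole response stream into a string, then rebuilds a fresh stream for every reader.

// Hypothetical illustration of the pattern described above: buffer everything,
// then recreate a stream per reader, paying an extra copy each time.
async function cacheEntryFromStream(stream: ReadableStream<Uint8Array>) {
  // Consumes the entire body into memory just to stringify it.
  const text = await new Response(stream).text();
  return {
    value: text,
    // Every reader triggers a fresh encode plus a new stream allocation,
    // which is what creates the GC pressure under high concurrency.
    toStream: (): ReadableStream<Uint8Array> => new Response(text).body!,
  };
}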


changeset-bot bot commented Oct 10, 2025

🦋 Changeset detected

Latest commit: 0a946d3

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 2 packages:

Name              Type
@opennextjs/aws   Patch
app-pages-router  Patch




pkg-pr-new bot commented Oct 10, 2025


pnpm add https://pkg.pr.new/@opennextjs/aws@1004

commit: 0a946d3

const teedPromise = pendingEntry.then((entry) => {
  // Optimization: We avoid consuming and stringifying the stream here,
  // because it creates double copies just to be discarded when this function
  // ends. This avoids unnecessary memory usage and reduces GC pressure.
Contributor


Wouldn't this cause an issue if we read this more than once in the get? We would then have a single ReadableStream that multiple readers try to access.
We could tee on get as well, but that means we'd end up with a copy in memory and multiple ReadableStreams. Not sure if it's worth it or not?
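
For reference, a minimal sketch of what teeing on get could look like (hypothetical names): each read keeps one branch for the cache and hands the other to the caller, at the cost of the tee buffering whatever the slower branch has not consumed yet.

// Hypothetical sketch: tee the cached stream on each get so it stays readable.
function streamFromEntry(entry: { value: ReadableStream<Uint8Array> }): ReadableStream<Uint8Array> {
  const [keep, give] = entry.value.tee();
  entry.value = keep; // leave a readable branch behind for the next get
  return give;        // the caller consumes this branch
}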

Contributor Author


Well, the goal is to avoid unnecessary memory allocations. Both optimizations have quirks, but right now concurrent calls get stalled and take a bigger hit because of this unnecessary copy. I'm open to suggestions.

@jasnell (Contributor) commented Oct 10, 2025


Might take a bit of experimentation to get exactly right. I'm also not convinced that this new bit is correct either. Another potential approach would be to use a Blob to buffer the data and use its .stream() method to get individual ReadableStream instances that read from the single in-memory buffer. That way you're not relying on, or falling victim to, the terrible default backpressure management of ReadableStream tees, which ends up creating multiple buffers in memory. But it's going to take a bit of careful thought to get right.

Regardless, I think this is going to require some careful consideration to get correct. @anonrig is absolutely right that the current code is needlessly bottlenecked by too many additional copies.
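
A rough sketch of that Blob-based approach, assuming the body is buffered once up front (hypothetical names, not this PR's exact code):

// Hypothetical sketch: buffer the body into a single Blob, then hand out an
// independent ReadableStream per reader via blob.stream(), with no tee involved.
async function bufferBody(stream: ReadableStream<Uint8Array>): Promise<Blob> {
  return new Response(stream).blob(); // one in-memory copy of the body
}

function streamFor(blob: Blob): ReadableStream<Uint8Array> {
  return blob.stream(); // each call yields a fresh stream over the same buffer
}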

Contributor Author


I've taken another shot and stored a Blob in the composable cache rather than text or a stream, so that any caller can create a readable stream from the blob at almost no cost.

@anonrig force-pushed the yagiz/improve-composable-cache branch 2 times, most recently from eae4d28 to 7ee033e on October 10, 2025 at 16:43
@anonrig force-pushed the yagiz/improve-composable-cache branch from 7ee033e to 2d435bf on October 10, 2025 at 16:44
@anonrig force-pushed the yagiz/improve-composable-cache branch from 3a3dacf to 3f4c9a5 on October 10, 2025 at 17:30
@anonrig requested review from conico974 and jasnell on October 10, 2025 at 17:42
@anonrig force-pushed the yagiz/improve-composable-cache branch 4 times, most recently from 1e06ab6 to 41f2bfa on October 10, 2025 at 19:10
...entry,
value: entry.value,
},
entryWithBlob,
Contributor


Pretty sure that the incrementalCache here expects a string for entry.value in this set call.

Contributor Author


I actually changed the type, so we store a blob now.

Contributor


But that's not enough: if we want to do that, we need to change every implementation here and the ones in cloudflare.
And I'm not sure what we really gain from doing that; someone will serialize it in incrementalCache.set anyway.
We can use a Blob in the map, but not for the incremental cache.
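
A small sketch of that compromise (hypothetical names and incremental-cache shape): keep the Blob in the in-process map so readers get cheap streams via blob.stream(), and serialize only at the incremental-cache boundary.

// Hypothetical sketch: Blob in the local map, string at the incremental cache.
const localEntries = new Map<string, Blob>();

async function persist(
  key: string,
  blob: Blob,
  incrementalCache: { set(key: string, value: string): Promise<void> },
): Promise<void> {
  localEntries.set(key, blob); // readers can call blob.stream() at low cost
  await incrementalCache.set(key, await blob.text()); // serialized once for storage
}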

Contributor Author


OK, so that would be problematic, because if we store it as a string, we would need to convert it into a stream every time we interact with it, since Next.js expects a ReadableStream. We want to avoid that unnecessary allocation.

Contributor


Yeah, but that's a big breaking change (definitely a major).
And that's actually already kind of planned for the next major, but I think it requires some thought about how we really want to store it.
The way I see it is an entire refactor of the incremental cache, splitting the data into the actual data (which would be a blob/stream) and metadata, which would be small, serializable data.
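
A rough sketch of what that split could look like (hypothetical shape, not a committed design):

// Hypothetical shape: large body kept as a Blob/stream, small serializable
// metadata kept separately so it can still flow through JSON-based stores.
interface ComposableCacheBody {
  body: Blob; // or a lazily resolved ReadableStream<Uint8Array>
}

interface ComposableCacheMeta {
  tags: string[];
  timestamp: number;
  revalidate?: number;
}

type SplitCacheEntry = { data: ComposableCacheBody; meta: ComposableCacheMeta };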

@anonrig force-pushed the yagiz/improve-composable-cache branch from 41f2bfa to 0a946d3 on October 10, 2025 at 19:42