Prerequisites
I have searched existing issues to ensure the bug has not already been reported
Fastify version
4.26.1
Plugin version
9.0.0
Node.js version
21.6.2
Operating system
Linux
Operating system version (i.e. 20.04, 11.3, 10)
Arch Linux (latest)
Description
For every incoming connection, createWebSocketStream is called, which creates a Duplex wrapper around the WebSocket object. The code in ws that does this, when setting up the listener that pushes incoming data into the Readable side of the Duplex, also checks whether the data can actually be pushed, pausing the WebSocket in case of backpressure: https://github.com/websockets/ws/blob/5e42cfdc5fa114659908eaad4d9ead7d5051d740/lib/stream.js#L64
This happens regardless of whether the developer actually intends to use the stream, since @fastify/websocket always creates a stream out of the WebSocket.
This problem was mentioned in #80; it occurs at around 16 kB of data because that is the default highWaterMark in Node.js. The fix submitted in #81 attempts to work around the problem by resuming the WebSocket when a new listener is added to it, but that event is not emitted when interacting with the underlying webSocketServer instead of using the socket from the callback. @trpc/server seems to do that (the issue has been reported to them - trpc/trpc#5530).
The proposed solution would be to either:
1. Introduce a breaking change that removes the stream creation and just supplies the WebSocket in the callback.
2. Introduce an option that tells the plugin whether or not the stream should be created.
3. Add more hacks to resume the stream.
The first option is, in my experience, the best outcome, since it will reduce the complexity of the code here and, as far as I can see, not many projects use the stream itself; most interact only with the socket property. The developer can still call createWebSocketStream manually on the resulting WebSocket, so this change causes no loss of functionality.
The second option introduces maintenance complexity and will be tricky to write TypeScript types for. The third option is what is currently happening, and that shouldn't be happening in an official Fastify-branded project. The current logic causes memory leaks (limited to 16 kB per client, though).
If the first option is something you'd like to proceed with, I can create a pull request.
Steps to Reproduce
Sending over 16 kB of data into the connection will result in no more messages being accepted by the server, while outgoing messages are still sent to the client just fine.
See also, reproduction with TRPC: https://github.com/mat-sz/trpc-wslink-bug-reproduction

Expected Behavior
No response
fix: provide websocket instead of stream to avoid potential backpressure issues (#289) (#290)
* fix: provide websocket instead of stream to avoid potential backpressure issues (#289)
* chore: update documentation and tests
* chore: update documentation to include an example of using createWebSocketStream
* chore: update failing unit test