[Flight] Add Debug Channel option for stateful connection to the backend in DEV #33627
Conversation
For now it just immediately releases objects that are decoded.
The API is prepared to accept duplex streams so that debug info can be sent through this channel. It also adds WebSocket support to the Server Bindings: this is a slightly different protocol than Duplex, but likely to be commonly used.
```js
@@ -42,12 +43,31 @@ type CallServerCallback = <A, T>(string, args: A) => Promise<T>;

export type Options = {
  callServer?: CallServerCallback,
  debugChannel?: {writable?: WritableStream, ...},
```
Notably the client is currently only included in the Browser builds. I guess in theory it could be useful to have a live connection between servers too.
This accepts a writable: WritableStream, which is the shape provided by the modern WebSocketStream API. It's aligned with all our other modern usages for browsers: Web Streams.
Unfortunately, WebSocketStream is not available in Safari or Firefox, so those need a polyfill. I didn't add support for the more widely supported WebSocket API, since that would also affect the readable directly, where we stopped supporting the XHR shape. So this is just left up to user space for now to polyfill or use a different transport protocol.
Ironically, the WebSocket shape is commonly available in Node.js, and it has a similar but not exact shape to Node Streams, so I added support for the Web shape of WebSocket in Node.
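A user-space polyfill along the lines the comment describes could wrap a plain WebSocket-shaped object in the WritableStream shape the `debugChannel` option expects. This is a minimal sketch, not part of React, and a real adapter would also need to wait for the socket's `open` event before writing:

```javascript
// Sketch (user space, not a React API): adapt a WebSocket-shaped object into
// the WritableStream shape that the debugChannel option accepts.
function webSocketToWritable(ws) {
  return new WritableStream({
    write(chunk) {
      // Forward each written chunk as a socket message.
      ws.send(chunk);
    },
    close() {
      ws.close();
    },
    abort() {
      ws.close();
    },
  });
}

// A fake socket is enough to exercise the adapter without a network.
const sent = [];
const fakeSocket = {
  closed: false,
  send(data) {
    sent.push(data);
  },
  close() {
    this.closed = true;
  },
};

const done = (async () => {
  const writer = webSocketToWritable(fakeSocket).getWriter();
  await writer.write('hello');
  await writer.close();
})();
```

The same adapter shape would work for the Web `WebSocket` in browsers that lack WebSocketStream, or for a different transport entirely.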
```js
@@ -104,6 +125,7 @@ function startReadingFromStream(
}

export type Options = {
  debugChannel?: {writable?: WritableStream, ...},
```
@devongovett You might want to remove these options and instead just wire this up automatically in development to any HMR sockets in Parcel, on both ends.
```js
process.env.NODE_ENV === 'development' &&
  typeof WebSocketStream === 'function'
) {
  const requestId = crypto.randomUUID();
```
This is used to associate the WebSocket request with the fetch request.
In theory, this system can work with multiple development servers. E.g. it's resilient to the server restarting (loses the connection) which is nice.
However, if you have multiple servers that might respond to the socket request vs the fetch, you end up with different servers answering.
Another approach would be to also just make the RSC request through the WebSocket.
```js
// We earlier deferred this same object. We're now going to eagerly emit it so let's emit it
// at the same ID that we already used to refer to it.
```
Why do we emit eagerly when we see it twice?
We only get here if we have passed the objectLimit check above, either because something else with a high limit also wrote this object or because we're now asking for it from the client.
If there's still an object limit being enforced we wouldn't get here no matter how many times we see the same object.
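The retention scheme being discussed can be modeled in miniature. This is a toy sketch, not React's actual serializer: a depth-limited object is retained under an id, and when it is later emitted eagerly, it reuses that id so earlier references to it stay valid:

```javascript
// Toy model of "emit at the same ID we already used to refer to it".
class DebugGraphWriter {
  constructor() {
    this.nextId = 0;
    this.idByObject = new Map();
    this.payloadById = new Map(); // null payload = deferred reference only
  }
  idFor(obj) {
    // Referential identity is preserved per request: the same object always
    // maps to the same id.
    let id = this.idByObject.get(obj);
    if (id === undefined) {
      id = this.nextId++;
      this.idByObject.set(obj, id);
    }
    return id;
  }
  defer(obj) {
    const id = this.idFor(obj);
    if (!this.payloadById.has(id)) {
      this.payloadById.set(id, null); // retained, not yet serialized
    }
    return id;
  }
  emitEagerly(obj) {
    const id = this.idFor(obj); // same id as the earlier deferred reference
    this.payloadById.set(id, JSON.stringify(obj));
    return id;
  }
}

const graphWriter = new DebugGraphWriter();
const value = {user: 'ada'};
const deferredId = graphWriter.defer(value);     // first pass: over the limit
const eagerId = graphWriter.emitEagerly(value);  // later: limit passed, emit fully
```

Because the id is stable, anything that already referenced the deferred object by id now resolves to the eagerly emitted payload.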
This adds plumbing for opening a stream from the Flight Client to the Flight Server so it can ask for more data on-demand. In this mode, the Flight Server keeps the connection open as long as the client is still alive and there are more objects to load. It retains any depth-limited objects so that they can be asked for later. In this first PR it just releases the object when it's discovered on the server and doesn't actually lazy load it yet. That's coming in a follow up.
This strategy is built on the model that each request has its own channel for this, instead of some global registry. That ensures that referential identity is preserved within a Request and the Request can refer to previously written objects by reference.
The fixture implements a WebSocket per request, but it doesn't have to be done that way. It can be multiplexed through an existing WebSocket, for example. The current protocol is just a Readable(Stream) on the server and a WritableStream on the client. It could even be sent through an HTTP request body if browsers implemented full duplex (which they don't).
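The transport shape described above can be simulated end to end with Web Streams. A TransformStream stands in for the WebSocket here, and the `release:<id>` message format is invented for illustration:

```javascript
// Sketch of the client-to-server direction: the client holds a
// WritableStream, the server consumes the other end as a readable.
const {readable, writable} = new TransformStream();

// Client end: ask the server to release a retained object.
const clientDone = (async () => {
  const writer = writable.getWriter();
  await writer.write('release:42');
  await writer.close();
})();

// Server end: read requests off the channel until the client goes away.
const serverDone = (async () => {
  const received = [];
  const reader = readable.getReader();
  for (;;) {
    const {done, value} = await reader.read();
    if (done) break;
    received.push(value);
  }
  return received;
})();
```

Because both ends are plain Web Streams, the same wiring works whether the pipe is a WebSocketStream, a multiplexed socket, or any other duplex transport.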
This PR only implements the direction of messages from Client to Server. However, I also plan on adding a Debug Channel in the other direction to allow debug info to (optionally) be sent from Server to Client through this channel instead of through the main RSC request. So the debugChannel option will be able to take writable or readable or both.
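The anticipated option shapes would look like this. Only the `{writable}` form exists as of this PR; the readable direction is planned, and the streams here are placeholders:

```javascript
// Shapes the debugChannel option is expected to accept.
const clientToServer = {writable: new WritableStream()};        // this PR
const serverToClient = {readable: new ReadableStream()};        // planned
const bidirectional = {
  writable: new WritableStream(),
  readable: new ReadableStream(),                               // planned
};
```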