
sockets.close behaviour when closed already rejected #24

Open
dom96 opened this issue Oct 27, 2023 · 3 comments

Comments

@dom96
Collaborator

dom96 commented Oct 27, 2023

From cloudflare/workerd#1305 we've realised that we should define what should happen when close is called on a socket that already had its closed promise rejected.

When the closed promise is resolved, the answer is easy: close does nothing.

But when closed is rejected, we have a couple of options:

  • close throws the exception in closed
  • close doesn't throw and leaves the closed promise unchanged (so it remains rejected)

Workerd currently implements a third option, where close doesn't throw but also resolves the closed promise, which means the exception can be lost. I think this just leads to gotchas, so I want to change it.
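To make the options concrete, here is a minimal sketch (a hypothetical mock, not the workerd API) of the second option: close() neither throws nor replaces an already-rejected closed promise.

```javascript
// Hypothetical mock illustrating option 2: if `closed` has already
// rejected, close() does not throw and leaves `closed` rejected as-is.
function makeSocket() {
  let rejectClosed;
  const closed = new Promise((_, reject) => { rejectClosed = reject; });
  closed.catch(() => {});  // pre-attach a handler to avoid unhandled-rejection noise
  return {
    closed,
    fail(err) { rejectClosed(err); },  // simulate the connection erroring out
    close() { return this.closed; },   // option 2: no throw, promise unchanged
  };
}

const sock = makeSocket();
sock.fail(new Error('connection reset'));
const p = sock.close();          // no synchronous throw
console.log(p === sock.closed);  // true: the original rejected promise
```

Under option 1, close() would instead surface the error held by closed (e.g. by throwing or returning a freshly rejected promise).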

@kentonv

kentonv commented Oct 27, 2023

IMO the close() method should actually wait for previous write()s to be flushed out and only succeed if all of those writes succeeded. Apparently the current implementation only queues the close to be applied later on, and so any write errors that occur after close() has been called don't get reported. I'm not sure what the streams spec says exactly but this seems contrary to the usual semantics of close() in other systems -- normally close() implies a flush.

@lyc8503

lyc8503 commented Oct 27, 2023

> IMO the close() method should actually wait for previous write()s to be flushed out and only succeed if all of those writes succeeded. Apparently the current implementation only queues the close to be applied later on, and so any write errors that occur after close() has been called don't get reported. I'm not sure what the streams spec says exactly but this seems contrary to the usual semantics of close() in other systems -- normally close() implies a flush.

This is exactly what I expected it to be at the beginning. This seems like a more reasonable behaviour. (Though I don't know why it's different in Workers.)

Maybe at least point this difference out in the docs?

@jasnell
Collaborator

jasnell commented Nov 5, 2023

Let's separate the operation of the close() method from the resolution of the returned promise.

When I call socket.close(): if the socket is not already closed and closing has not already been initiated, then closing is initiated and the closed promise is returned. The closed promise will be resolved when closing has completed. This means that if I call close() twice synchronously without waiting, the second call is effectively a no-op.

const p1 = socket.close();  // initiates closing, returns closed
const p2 = socket.close();  // closing already initiated, returns closed
p1 === p2;  // true 

The promise returned by close() is the same promise provided by socket.closed.

When the socket close completes, the closed promise is resolved.

The second socket.close() above, notably, does not throw synchronously at all. All it does is initiate the socket close operation if it hasn't already been initiated and return the closed promise. A simple pseudo-code implementation would be:

function close() {
  if (!closing) startClosing();
  return this.closed;
}

function startClosing() {
  closing = true;
  if (hasPendingWrites) pendingWrites.then(() => doClose());
  else doClose();
}
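A runnable version of that pseudo-code, using a hypothetical MockSocket (names illustrative, not the workerd implementation), shows the idempotent behaviour:

```javascript
// Runnable sketch of the pseudo-code above: close() initiates closing at
// most once and always returns the same `closed` promise.
class MockSocket {
  constructor() {
    this.closing = false;
    this.pendingWrites = Promise.resolve();  // pretend all writes have flushed
    this.closed = new Promise((resolve, reject) => {
      this._resolveClosed = resolve;
      this._rejectClosed = reject;
    });
  }
  close() {
    if (!this.closing) this.startClosing();
    return this.closed;
  }
  startClosing() {
    this.closing = true;
    // Wait for pending writes, then settle `closed` accordingly.
    this.pendingWrites.then(
      () => this._resolveClosed(),
      (err) => this._rejectClosed(err),
    );
  }
}

const socket = new MockSocket();
const p1 = socket.close();  // initiates closing, returns `closed`
const p2 = socket.close();  // closing already initiated: returns same promise
console.log(p1 === p2);     // true
```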

If the closed promise is rejected, note that this implementation just does the right thing. That is, if the closed promise is rejected, then the close operation was already initiated, so close() won't try to initiate it again; it simply returns the already-rejected promise. (This is the second of the two options you list above, @dom96.) The current behavior is definitely incorrect.

Separately, to what @kentonv was saying, initiating a close does mean waiting for all currently pending writes to complete and be flushed. So, for instance, if I have:

const writer = socket.writable.getWriter();
const w1 = writer.write(enc.encode('hello'));
const w2 = writer.write(enc.encode('there'));
const p1 = writer.close();  // p1 resolves only after w1 and w2 are resolved
const w3 = writer.write(enc.encode('!!'));  // rejects immediately because close has been called, even though closed may not yet have resolved
await Promise.all([w1, w2, p1]);

The call to socket.close() will initiate closing immediately such that no additional writes can be scheduled on the writer, but the two prior writes will be flushed to the socket before the close operation itself is flushed and the closed promise itself is resolved.

If either of the pending prior writes (w1 or w2) fail, then that failure should cascade to all pending writes and the pending close (e.g. if w1 rejects, w2 and p1 will reject with the same error. w3 will have already been rejected since the additional write-after-close is not allowed).

In other words, close() itself does not force a flush, particularly since there is no such flush mechanism in the streams standard. The close is itself an async operation that remains pending while there are writes still in the queue, but after close is called, no new writes can be scheduled in the queue.
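The write-after-close semantics described above can be checked against a plain WHATWG WritableStream (standing in for socket.writable; requires an environment with a global WritableStream, e.g. Node 18+ or a browser). This is a sketch of the expected behaviour, not workerd-specific code:

```javascript
// A WritableStream whose sink simply discards chunks, as a stand-in for
// the socket's writable side.
const enc = new TextEncoder();
const ws = new WritableStream({
  write(chunk) { /* pretend to flush the chunk to the socket */ },
});
const writer = ws.getWriter();

const w1 = writer.write(enc.encode('hello'));
const w2 = writer.write(enc.encode('there'));
const p1 = writer.close();                  // resolves only after w1 and w2 flush
const w3 = writer.write(enc.encode('!!'));  // rejects: write after close

w3.catch((err) => console.log('w3 rejected:', err.constructor.name));
```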
