
Simplify the closing handshake #1902

Closed

lpinca wants to merge 3 commits into master from simplify/closing-handshake

Conversation

@lpinca (Member) commented Jun 14, 2021

  • When the socket emits the 'end' event, do not call socket.end().
    End only the receiver stream.
  • Do not wait for a close frame to be received and sent before calling
    socket.end(). Call it right after the close frame is written.
  • When the receiver stream emits 'finish', send a close frame if no
    close frame is received.

The assumption is that the socket is allowed to be half-open. On the
server side this is always true (unless the user explicitly sets the
allowHalfOpen property of the socket to false). On the client side
the user might use an agent so we set socket.allowHalfOpen to true
when the http.ClientRequest object emits the 'upgrade' event.

Refs: #1899

@lpinca (Member, Author) commented Jun 14, 2021

cc: @pimterry

This is based on our discussion in #1899. There are some small things to discuss but I'm a bit in a hurry. I'll list them tomorrow.

@lpinca force-pushed the simplify/closing-handshake branch 3 times, most recently from 0019b57 to cdc99a2 on June 15, 2021 09:12
@lpinca (Member, Author) commented Jun 15, 2021

There are some small things to discuss but I'm a bit in a hurry. I'll list them tomorrow.

  • This branch
    if (err) return;
    was not covered by tests. It now is, but the error is expected: server.close() terminates all clients with websocket.terminate() (destroying the socket), and the remote peer then tries to send a close frame on a destroyed socket when its socket emits the 'end' event. It might make sense to change the server.close() behavior so that it closes all connections gracefully with websocket.close(). This should eventually be done independently of this change.
  • The _closeFrameSent property was needed because socket.end() was called after a close frame was both received and sent. socket.end() is now called right after the close frame is sent, so the _closeFrameSent property is no longer needed for that purpose. I did not remove it because its value is used to compute the value of the wasClean property of the CloseEvent:
    this.wasClean = target._closeFrameReceived && target._closeFrameSent;
    I've read the HTML WebSocket specification and tried to find some clue in the Web Platform Tests, but it is not clear to me when the wasClean attribute should be set to true.
  • This patch is completely untested in real-world scenarios. I don't know if it works as expected with other implementations of the WebSocket specification, for example in environments where there is no support for half-open sockets.

@pimterry (Contributor)

This is really interesting and looks very promising imo! Great work. I've been playing around with similar changes myself as well, and testing the code you posted before for Chrome's behaviour, to try and confirm how everything works here.

I'd quite like to try and finish putting some tests together to check exactly how the previous version behaved so we can check how that will change with these updates. In theory we should be able to simulate each of the possible open/half-open cases in tests from within node ourselves, which would make it much easier to be confident about the real-world impact.

I don't think there's any hurry here and I'm a bit squeezed for time right now, so I'm going to try and flesh those out first to explore this in a bit more depth, and come back to this with some testing and a proper review in a few days. Hope that's ok!

@lpinca (Member, Author) commented Jun 15, 2021

Sure, there is no hurry. Take your time.

@pimterry (Contributor)

I've just opened a PR with some extra tests to explore all this.

I haven't done thorough real world testing, just lots of playing around with the unit tests, but those tests clearly show broken behaviour on master, and working behaviour on this branch, which seems like a great sign to me 👍

On your points above:

It might make sense to change server.close() behavior so that it closes all connections gracefully with websocket.close().

That makes sense to me too, and I agree it can be done separately 👍

I don't know if it works as expected with other implementations of the WebSocket specification, for example in environments where there is no support for half-open sockets.

I haven't extensively tested other clients, but the tests here do some basic simulation of every combination of half/full-close supported peers, and at the very least they all seem to be able to shut down cleanly in simple cases, so we can be confident they're not completely broken.

I've read the HTML WebSocket specification and tried to find some clue in the Web Platform Tests but it is not clear to me when the wasClean attribute should be set to true.

The WebSocket RFC (RFC 6455), Section 7.1.4, says:

If the TCP connection was closed after the WebSocket closing handshake was completed, the WebSocket connection is said to have been closed cleanly.
If the WebSocket connection could not be established, it is also said that The WebSocket Connection is Closed, but not cleanly.

I think that's probably the clearest definition around: a websocket is cleanly closed only if we completed the handshake before the socket was fully closed, which AFAICT just requires that we sent and received a close frame.
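As a sketch, that definition reduces to a conjunction of the two flags ws already tracks (a simplified model for illustration, not the actual CloseEvent code):

```javascript
'use strict';

// Sketch of the rule quoted above from the RFC: the closure is "clean" only
// if the closing handshake completed, i.e. a close frame was both sent and
// received before the TCP connection went away.
function wasClean(closeFrameSent, closeFrameReceived) {
  return closeFrameSent && closeFrameReceived;
}

console.log(wasClean(true, true)); // true: handshake completed, clean
console.log(wasClean(true, false)); // false: peer never answered, not clean
```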

@lpinca force-pushed the simplify/closing-handshake branch 5 times, most recently from f713a6e to 29c7f29 on June 24, 2021 10:14
lpinca and others added 3 commits June 24, 2021 12:32
- When the socket emits the `'end'` event, do not call `socket.end()`.
  End only the `receiver` stream.
- Do not wait for a close frame to be received and sent before calling
  `socket.end()`. Call it right after the close frame is written.
- When the `receiver` stream emits `'finish'`, send a close frame if no
  close frame is received.

The assumption is that the socket is allowed to be half-open. On the
server side this is always true (unless the user explicitly sets the
`allowHalfOpen` property of the socket to `false`). On the client side
the user might use an agent so we set `socket.allowHalfOpen` to `true`
when the `http.ClientRequest` object emits the `'upgrade'` event.

Refs: #1899
If we're already closing, then we shouldn't move into the new -2 ready
state (i.e. closing-because-the-remote-end-closed). We're already
closing, we shouldn't go back to a state that allows sending another
close frame.
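A minimal sketch of that guard (the constant names and the concrete values are illustrative stand-ins for the internal states described above, not the real ws code):

```javascript
'use strict';

// Illustrative state machine fragment: CLOSING mirrors the standard ready
// state value 2; REMOTE_CLOSING stands in for the new internal -2 state
// ("closing because the remote end closed").
const OPEN = 1;
const CLOSING = 2;
const REMOTE_CLOSING = -2;

function onRemoteSocketEnd(ws) {
  // If a closing handshake is already in progress, stay in CLOSING rather
  // than regressing to a state that would allow sending another close frame.
  if (ws.readyState === CLOSING) return;
  ws.readyState = REMOTE_CLOSING;
}
```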
@lpinca force-pushed the simplify/closing-handshake branch from 29c7f29 to 1d923fa on June 24, 2021 10:32
@lpinca (Member, Author) commented Jun 24, 2021

I amended the first commit with a minor change:

There is no need to wait for sender.close() callback before calling socket.end(). We can call it right after the close frame is written to the socket in sender.doClose().

diff --git a/lib/sender.js b/lib/sender.js
index ad71e19..18f0cf4 100644
--- a/lib/sender.js
+++ b/lib/sender.js
@@ -151,6 +151,7 @@ class Sender {
       }),
       cb
     );
+    this._socket.end();
   }
 
   /**
diff --git a/lib/websocket.js b/lib/websocket.js
index 1b652ca..fbe4a67 100644
--- a/lib/websocket.js
+++ b/lib/websocket.js
@@ -233,7 +233,6 @@ class WebSocket extends EventEmitter {
       if (err) return;
 
       this._closeFrameSent = true;
-      this._socket.end();
     });
 
     //

@lpinca (Member, Author) commented Jun 24, 2021

I'm a bit torn on how to handle this change.

  • It will certainly create issues if a ws client with this change connects to a ws server without this change, and vice versa.
  • If added in next major version it will create even more issues as many people (for good reasons) do not upgrade to a major version or do it very slowly.
  • I still don't know if it causes issues with non-ws clients or servers.

I think the best option is to include this in a minor release.

@cTn-dev, sorry for pinging you here, but if I remember correctly you were using permessage-deflate in production. Is there any chance you can help us test this? Thank you.

@pimterry (Contributor)

I think we're actually being too aggressive with closure here I'm afraid. It's correct from a TCP POV to half-close as soon as we've sent everything, but the fully correct websockets behaviour is to close after a close frame is both sent + received, or to close immediately after sending a close frame only when we hit an error.

We do still want some of the previous behaviour! We don't want to remove that completely.

A bit of googling suggests some other servers will indeed fail in this case if we're connecting to them from node (e.g. when connecting to Akka servers, although they're quite apologetic about it: https://doc.akka.io/docs/akka-http/current/client-side/websocket-support.html#half-closed-client-websockets).

Relevant bits of the spec:

Once an endpoint has both sent and received a Close control frame, that endpoint SHOULD Close the WebSocket Connection

(i.e. after we have both close frames we should start half-closing to initiate TCP shutdown)

If The WebSocket Connection is Established prior to the point where the endpoint is required to Fail the WebSocket Connection, the endpoint SHOULD send a Close frame with an appropriate status code (Section 7.4) before proceeding to Close the WebSocket Connection.

(i.e. after an error we should send a close frame and only then immediately half-close to initiate TCP shutdown)

Sorry, I think this is my fault 😬. My explanation in #1899 wasn't clear and I think I've confused myself slightly en route too. This change is still required, but only for the case where we send a close frame after an error I think. Helpfully that does reduce the size of this change and the potential impact though!

@cTn-dev (Contributor) commented Jun 24, 2021

I'm a bit torn on how to handle this change.

  • It will certainly create issues if a ws client with this change connects to a ws server without this change, and vice versa.
  • If added in next major version it will create even more issues as many people (for good reasons) do not upgrade to a major version or do it very slowly.
  • I still don't know if it causes issues with non-ws clients or servers.

I think the best option is to include this in a minor release.

@cTn-dev, sorry for pinging you here, but if I remember correctly you were using permessage-deflate in production. Is there any chance you can help us test this? Thank you.

Sure, I can take it for a spin. Is there a specific configuration/scenario you would like me to focus on/emulate?

@lpinca (Member, Author) commented Jun 24, 2021

(i.e. after we have both close frames we should start half-closing to initiate TCP shutdown)

Yes, that is my understanding as well and the reason why it is currently implemented this way.

This change is still required, but only for the case where we send a close frame after an error I think. Helpfully that does reduce the size of this change and the potential impact though!

I still think that the easiest way to follow the spec recommendation of sending a close frame after an error is a best effort approach like the one suggested in #1892 (comment).

@cTn-dev no particular configuration. The ideal scenario would involve some incoming and/or outgoing data buffering on both peers, and then checking whether there is data loss in either peer after the connection is closed, for example by monitoring their close code.

@pimterry (Contributor)

I still think that the easiest way to follow the spec recommendation of sending a close frame after an error is a best effort approach like the one suggested in #1892 (comment).

The problem with using destroy() rather than a clean TCP shutdown seems to be that it will send RST packets which may cause loss of pending good data in the remote peer unnecessarily. From the spec:

on some platforms, if a socket is closed with data in the receive queue, a RST packet is sent, which will then cause recv() to fail for the party that received the RST, even if there was data waiting to be read.

I think that means if we send a RST (due to pending incoming data on these platforms, or any incoming data that's already in transit) then the remote peer will lose data that was successfully sent before the error but which hadn't been read yet, and/or the close frame itself.

It's actually a bit worse in our case I think, because we might have data still being compressed in the sender which would block the close frame. That means the destroy() in your example would be called before we'd sent the close frame or pending data at all, so all of that would always be lost.

Aside from potential data loss, it'll also result in unnecessary socket errors on the remote peer, instead of a clean close frame + TCP shutdown, which is a mildly annoying for everybody.

I agree it's easier to just slam everything shut, but it is cleaner and more correct to end() as you added in #1899, and if we can do that I think we should. I'm pretty sure it's possible to do without extra issues, we just need to cover the last edge cases that are left over to make sure we always wait appropriately and make sure everything is tidied up properly.

I think doing so within this PR looks something like:

  • Keep the sent/received checks we had before.
  • Do end() immediately after a error close frame when the websocket connection fails (but not after normal close frames, and not with a destroy() immediately).
  • Keep all the other new changes here which add support for half-open sockets, so we cleanly shut down in that case too, and so that we handle receiver 'finish' & socket 'end' cleanly.

My understanding is that in that case:

  • Normal shutdown behaviour is more or less the same as before, so it's a pretty safe change.
  • We fix the current issue on master where a remote half-close means we immediately end and so lose all pending outgoing data.
  • We fix the edge case issue on master, created by Close the connection cleanly when an error occurs #1899, where simultaneous errors or similar issues mean we don't shut down the socket at all until timeout. Now we would always end() after errors, so we'd only need the destroy() timeout if we get no socket error but the remote peer doesn't respond at all, which is the case that the timeout is designed for.

Does that make sense and sound sensible to you?

@lpinca (Member, Author) commented Jun 24, 2021

Does that make sense and sound sensible to you?

Yes.

@cTn-dev (Contributor) commented Jun 24, 2021

(i.e. after we have both close frames we should start half-closing to initiate TCP shutdown)

Yes, that is my understanding as well and the reason why it is currently implemented is this way.

This change is still required, but only for the case where we send a close frame after an error I think. Helpfully that does reduce the size of this change and the potential impact though!

I still think that the easiest way to follow the spec recommendation of sending a close frame after an error is a best effort approach like the one suggested in #1892 (comment).

@cTn-dev no particular configuration. The ideal scenario would involve some incoming and/or outgoing data buffering on both peers, and then checking whether there is data loss in either peer after the connection is closed, for example by monitoring their close code.

So, as of right now I don't really have a straightforward way to make sure something is in the buffers on each side.
I tried a custom local build (while comparing the sent vs received close code) which periodically opens and closes client connections.

The close event frequently reports 1006 instead of the expected, client-sent 1000.
This definitely wasn't the case on 7.4.6 or 7.5.0.

These early results should be taken with a grain of salt since the websocket server could be at "fault" here. I will have to dig deeper to see what is going on.

@lpinca (Member, Author) commented Jun 24, 2021

@pimterry

  • We fix the current issue on master where a remote half-close means we immediately end and so lose all pending outgoing data.

There is actually a problem with this:

  1. A peer calls socket.end() without sending a close frame.
  2. The 'end' event is emitted by the socket of the other peer.
  3. The ready state is changed to -2 and receiver.end() is called.
  4. The receiver then emits 'finish'. A close frame was not received so websocket.close() is called.
  5. A close frame is sent to the peer that originally called socket.end() at point 1.
  6. The callback of sender.close() is called. socket.end() is now called only if the receiver errored or if a close frame was received, but neither of these conditions is true.

Does it make sense?

I'm not sure how to fix this if we want to keep the current behavior. If socket.end() should be called only after a close frame is sent and received, then receiving an 'end' event without a close frame means that the other peer is not behaving correctly.

@pimterry (Contributor)

receiving an 'end' event without a close frame means that the other peer is not behaving correctly.

Ok yes, I think you're right, that's basically true.

In happy behaviour with a clean close, they MUST always send a close frame first, that's quite clear.

If there's some kind of error somewhere, they might not, and they're still technically within the rules. It's encouraged (in 7.1.7: after errors they SHOULD send a close frame before starting to close the connection, and they SHOULD NOT close the connection unexpectedly for other reasons) but not a strict MUST anywhere. If they half-shutdown a socket, though, they are doing something weird, and it seems very likely that either we or they have made an error at this point, which is why the connection is being closed; that probably means they're ignoring anything we send anyway.

It's still important that we finish processing any inputs they sent before closing the socket (so we should accept any pending compressed or close frames), but AFAICT we already do that correctly. I guess it seems reasonable to just immediately close our end of the connection at that point too, to dump any pending data, and call it an unclean 1006 closure.

There is actually a problem with this

I think we could work around the case you've got there with something like 'after the close frame is sent, if the socket is half-shutdown already then just end', but given the above I think you're totally right and we shouldn't worry about sending close frames and pending data in this case.

@lpinca (Member, Author) commented Jun 24, 2021

I think we could work around the case you've got there with something like 'after the close frame is sent, if the socket is half-shutdown already then just end'

Yes, I thought about that. We can do

if (
  this._closeFrameReceived ||
  this._receiver._writableState.finished ||
  this._receiver._writableState.errorEmitted
) {
  this._socket.end();
}

in the callback of sender.close() but it means breaking the "close the socket when a close frame is sent and received" contract.

@lpinca (Member, Author) commented Jun 24, 2021

Even with that check in place, a deadlock can be created if the user calls websocket.close() in the window between the socket 'end' event and the receiver 'finish' event. In this case, the user-initiated websocket.close() would not call socket.end() because none of the above conditions is true, and the 'finish'-initiated websocket.close() would not call socket.end() because the ready state is CLOSING (2) but no close frame was received.

This is not an issue in the receiver error case because websocket.close() is called synchronously when the 'error' event is emitted.

I think I'm fine dropping buffered outgoing data and sending no close frame if the other peer ends the socket without sending a close frame.

@pimterry (Contributor)

I think I'm fine dropping buffered outgoing data and sending no close frame if the other peer ends the socket without sending a close frame.

I'm convinced, yeah this seems disproportionately complicated 👍

@lpinca (Member, Author) commented Jun 25, 2021

@pimterry besides the "simultaneous errors" case, there is another condition that results in a deadlock on master and is not addressed by adding || this._receiver._writableState.errorEmitted to this if condition

if (this._closeFrameReceived) this._socket.end();

The problem arises when

  • A peer is closing: the close frame was sent but not received.
  • The peer now receives an invalid frame.

To fix that we also need to change this

if (this._closeFrameSent && this._closeFrameReceived) this._socket.end();

to something like

if (this._closeFrameSent && (this._closeFrameReceived || this._receiver._writableState.errorEmitted))

I also tested how browsers handle the "socket end with no close frame" and the "error after open" cases.

  • In the former case both Safari and Chrome immediately close the connection without sending back a close frame.
  • In the latter Safari immediately closes the connection without sending back a close frame. Chrome sends a close frame but only if there is no buffered outgoing data.
const crypto = require('crypto');
const http = require('http');

const GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

const data = `<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
  </head>
  <body>
    <script>
      (function () {
        const ws = new WebSocket('ws://localhost:8080');

        ws.onopen = function () {
          console.log('open');
        };

        ws.onclose = function (evt) {
          console.log(evt.code);
        };
      })();
    </script>
  </body>
</html>`;

const server = http.createServer();

server.on('request', function (request, response) {
  response.setHeader('Content-Type', 'text/html');
  response.end(data);
});

server.on('upgrade', function (request, socket) {
  const key = crypto
    .createHash('sha1')
    .update(request.headers['sec-websocket-key'] + GUID)
    .digest('base64');

  socket.on('error', console.error);
  socket.on('data', function() {
    socket.write('ECONNRESET?');
  });

  socket.write(
    [
      'HTTP/1.1 101 Switching Protocols',
      'Upgrade: websocket',
      'Connection: Upgrade',
      `Sec-WebSocket-Accept: ${key}`,
      '\r\n'
    ].join('\r\n')
  );

  process.once('SIGINT', function () {
    console.log('Sending an invalid frame');
    socket.write(Buffer.from([0x85, 0x00]));
  });
});

server.listen(8080, function () {
  console.log('Listening on *:8080');
});

The script above triggers an ECONNRESET error when using Chrome on macOS as the client. However, I see no RST with tcpdump.

Finally, socket.write() followed by socket.destroy() triggers a RST only if the other peer writes to the socket. Yes, I'm still thinking that #1892 (comment) is the best thing to do. We are in an error condition, so there is a good chance that the other peer is malfunctioning. We should not wait for all buffered data to be written; it might never happen, given that the peer is malfunctioning. We should just try to send the close frame, which might or might not be received.
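The best-effort approach being referred to could look roughly like this (a hypothetical sketch; `abortConnection` and the timeout value are assumptions, not the code from #1892):

```javascript
'use strict';

// Hypothetical best-effort error shutdown, not ws's actual implementation:
// queue the close frame, but never wait for the full write queue to drain.
function abortConnection(socket, closeFrame, timeout = 1000) {
  // The close frame may or may not reach the peer; we make no guarantee.
  socket.write(closeFrame);

  // Destroy after a short grace period regardless, accepting the risk of
  // an RST if data from the peer is still in flight.
  const timer = setTimeout(() => socket.destroy(), timeout);
  socket.once('close', () => clearTimeout(timer));
}
```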

lpinca added a commit that referenced this pull request Jun 25, 2021
Ensure `socket.end()` is called if an error occurs simultaneously on
both peers.

Refs: #1902
@lpinca mentioned this pull request Jun 25, 2021
lpinca added a commit that referenced this pull request Jun 26, 2021
Ensure that `socket.end()` is called if an error occurs simultaneously
on both peers.

Refs: #1902
@lpinca (Member, Author) commented Jun 28, 2021

The script above triggers an ECONNRESET error when using Chrome on macOS as the client. However, I see no RST with tcpdump.

I see it on Windows.

[screenshot of a packet capture showing the RST]

Tested with IE 11 19043.1081, Firefox 89.0.2, and Chrome 91.0.4472.114.

@pimterry (Contributor)

socket.write() followed by socket.destroy() triggers a RST only if the other peer writes to the socket.

That should be true, I agree, but as documented in the spec, some platforms will RST anyway in some cases, e.g. if there is more data waiting in our receive queue. Reports like libuv/libuv#3034 link to various specific cases where Windows will send RSTs in node if you try to shut down too aggressively.

Even without that, it's quite possible that the remote peer has data in flight when we destroy(), in which case we'll send a RST on all platforms if that data arrives after destroy().

This will result in unnecessary data loss and connection errors for the remote peer, and if we can we should close things properly to avoid that.

As far as I can tell, #1908 looks great though, and does resolve everything without the larger much riskier changes from this PR. With that PR merged, AFAICT:

  • We always try to send a close frame when we get an error.
  • We then wait for either a close response, socket end, or error, and then cleanly shut down the socket.
  • We now always shut down the socket, so we only need the timeout if the remote peer does nothing at all after we send them an error close frame - ignoring the error close frame entirely and keeping the connection open. Even in that case, the socket still always gets destroyed after 30 seconds.
  • We never unnecessarily lose pending data in the remote peer (which could happen if we create a RST), and we don't unnecessarily lose pending outgoing data either.

We should not wait for all buffered data to be written, it might never happen given that the peer is malfunctioning, just try to send the close frame which might or might not be received.

I don't really mind if we do drop other pending outgoing data, if you want, by sending the close frame immediately instead of adding it to the queue. The spec doesn't define this either way that I can see, and it doesn't seem unreasonable if that makes our lives simpler.

I do think we should not destroy the socket immediately though, since everything explicitly says that we shouldn't because it can cause real problems, and #1908 means everything is now cleaned up properly regardless.

What are the reasons you'd prefer to destroy() immediately instead of doing a clean TCP shutdown? From my POV: the spec explicitly says we should wait, I don't see any substantial downside, and it avoids the risk of data loss of both already sent data and the error close frame.

I'm not sure browsers are good benchmarks for behaviour here by the way - imo server to client error feedback is usually much more important & useful than the opposite, so the tradeoffs for WS are quite different to those for a web browser. As a developer building a websocket API, I really want my API to tell bad clients what they're doing wrong, so they can change their broken implementations! As a user browsing the internet or a developer building an API client, I'm usually not that bothered.

And in general of course, just because Safari is poorly behaved doesn't mean we should be too 😃

@lpinca (Member, Author) commented Jun 28, 2021

What are the reasons you'd prefer to destroy() immediately instead of doing a clean TCP shutdown?

  1. We are in an error condition; we should be nice and try to send the close frame, but not at the cost of keeping the connection open for up to 30s. The counterargument here is that the same can happen for a normal closure, so perhaps this is more a timeout issue and we should change it to something shorter than 30 seconds.
  2. It is consistent with how Node.js core HTTP server handles parsing errors.
  3. It is consistent with browsers' behavior. They should also follow the specification even if tradeoffs are slightly different. A server might be equally interested in knowing why a client is closing the connection.

@lpinca (Member, Author) commented Jun 28, 2021

This was a fun exercise and a constructive discussion. Thanks to both of you.

@lpinca closed this Jun 28, 2021
@lpinca deleted the simplify/closing-handshake branch June 28, 2021 19:00
lpinca added a commit that referenced this pull request Jun 28, 2021
Ensure that `socket.end()` is called if an error occurs simultaneously
on both peers.

Refs: #1902
@lpinca mentioned this pull request Jul 13, 2021
lpinca added a commit that referenced this pull request Jul 13, 2021
Instead of abruptly closing all WebSocket connections, try to close them
cleanly using the 1001 status code.

If the HTTP/S server was created internally, then close it and emit the
`'close'` event when it closes. Otherwise, if client tracking is
enabled, then emit the `'close'` event when the number of connections
goes down to zero. Otherwise, emit it in the next tick.

Refs: #1902
lpinca added a commit that referenced this pull request Jul 13, 2021
Instead of abruptly closing all WebSocket connections, try to close them
cleanly using the 1001 status code.

If the HTTP/S server was created internally, then close it and emit the
`'close'` event when it closes. Otherwise, if client tracking is
enabled, then emit the `'close'` event when the number of connections
goes down to zero. Otherwise, emit it in the next tick.

Refs: #1902
lpinca added a commit that referenced this pull request Jul 14, 2021
When `WebSocketServer.prototype.close()` is called, stop accepting new
connections but do not close the existing ones.

If the HTTP/S server was created internally, then close it and emit the
`'close'` event when it closes. Otherwise, if client tracking is
enabled, then emit the `'close'` event when the number of connections
goes down to zero. Otherwise, emit it in the next tick.

Refs: #1902
lpinca added a commit that referenced this pull request Jul 14, 2021
When `WebSocketServer.prototype.close()` is called, stop accepting new
connections but do not close the existing ones.

If the HTTP/S server was created internally, then close it and emit the
`'close'` event when it closes. Otherwise, if client tracking is
enabled, then emit the `'close'` event when the number of connections
goes down to zero. Otherwise, emit it in the next tick.

Refs: #1902
lpinca added a commit that referenced this pull request Jul 14, 2021
When `WebSocketServer.prototype.close()` is called, stop accepting new
connections but do not close the existing ones.

If the HTTP/S server was created internally, then close it and emit the
`'close'` event when it closes. Otherwise, if client tracking is
enabled, then emit the `'close'` event when the number of connections
goes down to zero. Otherwise, emit it in the next tick.

Refs: #1902