After the invite_1_claimer_wait_peer command and until the greeter peer connects, the server does not disconnect when it receives a WS close frame. This breaks RFC 6455: the server must send a close frame in response and immediately close the underlying TCP connection.
This hurts some WebSocket clients: when the user cancels the claim process at the greeter-waiting stage, the application freezes until an internal timeout fires after 30 seconds and the connection is finally closed on the client side.
This is also an issue with the "async" model of the Parsec protocol. In this case the server retains every response and sends them all together with the greeter_public_key message once the greeter connects. The most visible effect is when the claimer cancels and disconnects, since the server does not enforce the disconnection as it should.
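For context, RFC 6455 requires the endpoint that receives a close frame to reply with its own close frame and then drop the TCP connection. A minimal sketch of what such a close frame looks like on the wire (the helper names are mine, not part of Parsec; server-to-client frames are unmasked per the RFC):

```python
import struct

# RFC 6455 close frame layout (server-to-client frames are unmasked):
#   byte 0: 0x88 = FIN bit + opcode 0x8 (connection close)
#   byte 1: payload length (must be <= 125 for a control frame)
#   payload: optional 2-byte big-endian status code + UTF-8 reason

def build_close_frame(code: int = 1000, reason: str = "") -> bytes:
    payload = struct.pack(">H", code) + reason.encode("utf-8")
    if len(payload) > 125:
        raise ValueError("control frame payload must be <= 125 bytes")
    return bytes([0x88, len(payload)]) + payload

def parse_close_frame(frame: bytes) -> tuple:
    assert frame[0] & 0x0F == 0x8, "not a close frame"
    length = frame[1] & 0x7F
    payload = frame[2:2 + length]
    # 1005 is the RFC's reserved "no status code present" value
    code = struct.unpack(">H", payload[:2])[0] if len(payload) >= 2 else 1005
    return code, payload[2:].decode("utf-8")
```

When the backend honours the close handshake, the client's cancel is acknowledged within one round trip instead of hanging until its internal timeout.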
See the following sequence from a client (green arrow = client sends a message, red arrow = server sends a message back):
The server has to send the claimer_wait_peer response carrying the greeter_public_key before it will respond to anything else, including pings and the WS disconnect frame.
16:58:35 : the client initiates a claim by sending an invite_1_claimer_wait_peer message.
That blocks the server.
16:59:03 : the client sends a routine ping keep-alive message.
16:59:30 : the client sends the next routine ping.
16:59:50 : the greeter connects and starts the greeting. The server sends the greeter_public_key.
It also immediately sends the pong responses, which means they were stuck in a queue while the server itself was stuck. We noticed it also ignores any WS disconnect frames during that window, which is the main UX issue.
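The trace above can be modeled with a toy per-connection loop (the names, timings, and asyncio queues are illustrative, not the actual backend code): while a long-lived command runs, nothing else on the connection is read, so pings queue up and are only answered once the command completes.

```python
import asyncio

async def blocking_connection_loop(inbox: asyncio.Queue, outbox: asyncio.Queue) -> None:
    # Model of the current backend behaviour: one command is handled to
    # completion before the next frame is even read from the connection.
    while True:
        msg = await inbox.get()
        if msg == "invite_1_claimer_wait_peer":
            await asyncio.sleep(0.05)  # stands in for waiting on the greeter
            await outbox.put("greeter_public_key")
        elif msg == "ping":
            await outbox.put("pong")
        elif msg == "close":
            return

async def main() -> list:
    inbox, outbox = asyncio.Queue(), asyncio.Queue()
    task = asyncio.create_task(blocking_connection_loop(inbox, outbox))
    await inbox.put("invite_1_claimer_wait_peer")
    await inbox.put("ping")            # sent while the command is running
    await asyncio.sleep(0.01)
    assert outbox.empty()              # the ping is stuck behind the command
    await asyncio.sleep(0.1)           # let the "greeter" arrive
    replies = [outbox.get_nowait(), outbox.get_nowait()]
    await inbox.put("close")
    await task
    return replies

print(asyncio.run(main()))  # prints ['greeter_public_key', 'pong']
```

The pong only flushes out after the greeter_public_key, matching the burst seen in the trace.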
The issue is that currently the backend doesn't keep listening on the client connection while it is handling a command (the idea being that a command is fast enough to process that it's fine to wait for its completion before dealing with the next connection event).
As you discovered, the issue arises when the command is long-lived, which is the case for invite_1_claimer_wait_peer.
The solution would be to use the run_with_breathing_transport function, which allows running the command while still listening on the client connection at the same time. This is what the event_listen command uses, so the client can cancel the listen at any time.
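A minimal sketch of that "breathing" idea, with asyncio queues standing in for the transport (the real run_with_breathing_transport lives in the Parsec backend; everything below, including the function and message names, is illustrative): the long-lived command runs concurrently with a reader on the connection, so a ping is answered immediately and a close cancels the command on the spot.

```python
import asyncio

async def run_with_breathing(command, inbox: asyncio.Queue, outbox: asyncio.Queue) -> None:
    # Run the long-lived command while still reading the client connection,
    # so pings and close frames are handled as they arrive.
    cmd_task = asyncio.create_task(command())
    try:
        while not cmd_task.done():
            read_task = asyncio.create_task(inbox.get())
            done, _ = await asyncio.wait(
                {cmd_task, read_task}, return_when=asyncio.FIRST_COMPLETED
            )
            if read_task in done:
                msg = read_task.result()
                if msg == "ping":
                    await outbox.put("pong")       # answered right away
                elif msg == "close":
                    cmd_task.cancel()              # client cancelled the wait
                    await outbox.put("close_ack")
                    return
            else:
                read_task.cancel()
        await outbox.put(cmd_task.result())
    finally:
        if not cmd_task.done():
            cmd_task.cancel()

async def main() -> None:
    inbox, outbox = asyncio.Queue(), asyncio.Queue()

    async def wait_peer():
        await asyncio.sleep(10)  # the greeter never shows up in this run
        return "greeter_public_key"

    task = asyncio.create_task(run_with_breathing(wait_peer, inbox, outbox))
    await inbox.put("ping")
    assert await outbox.get() == "pong"       # no longer stuck in a queue
    await inbox.put("close")
    assert await outbox.get() == "close_ack"  # close honoured immediately
    await task

asyncio.run(main())
```

The design point is the same as for event_listen: the command and the connection reader race, and whichever finishes first is serviced, so the server can honour a close frame mid-command.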