Set up logging for unused codes in last 24 hours for any server issues (unless Brian gets the state stuff done before us) on the LA magic wormhole server
I don't think we need this on the wormhole server, do we?
The "other end" of the wormhole can determine if a code was used successfully or not (i.e. either it timed out and we closed it, or something connected). We also would want a way for users of GridSync to say "it didn't work".
I believe this issue is geared more towards having a durable on-disk record of pending invites, so that the wormhole server's state could be restored more easily in the event of a crash or restart.
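A minimal sketch of what such a record could look like, assuming a standalone SQLite file; the names (`PendingInvites`, `invites.db`) are purely illustrative and not part of the magic-wormhole server. It would also make the "codes unused in the last 24 hours" query from the title cheap to answer:

```python
import sqlite3
import time


class PendingInvites:
    """Durable on-disk record of invite codes that were issued but not
    yet claimed, so the state can be inspected (or restored) across a
    server restart. Illustrative sketch only."""

    def __init__(self, path="invites.db"):
        self._db = sqlite3.connect(path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS pending ("
            " code TEXT PRIMARY KEY,"
            " issued_at REAL NOT NULL,"
            " claimed_at REAL)"
        )
        self._db.commit()

    def record(self, code):
        # Called when a code is handed out.
        self._db.execute(
            "INSERT OR IGNORE INTO pending (code, issued_at) VALUES (?, ?)",
            (code, time.time()),
        )
        self._db.commit()

    def mark_claimed(self, code):
        # Called when the other side successfully connects.
        self._db.execute(
            "UPDATE pending SET claimed_at = ? WHERE code = ?",
            (time.time(), code),
        )
        self._db.commit()

    def unused_in_last_24h(self):
        # Codes issued in the last 24 hours that were never claimed.
        cutoff = time.time() - 24 * 60 * 60
        rows = self._db.execute(
            "SELECT code, issued_at FROM pending"
            " WHERE claimed_at IS NULL AND issued_at >= ?",
            (cutoff,),
        )
        return rows.fetchall()
```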
Oh, I see: so this feature is really about the fact that the wormhole server currently nukes all pending connections when you restart it, so we'd like to know whether we actually lost any in-progress invites, or at least see whether any are pending before we restart?
(But then that won't matter once wormhole can restart and continue pending connections.)
The client library knows how to resume the protocol after a reconnection event,
assuming the client process itself continues to run.
That's good, but by far the most common cause of an interrupted connection to the wormhole server is a deployment update that restarts our wormhole client. We'll have to implement our own persistence and reconnect logic to survive that scenario; a rough sketch of what that could look like is below.
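This is only a sketch under assumptions: the file name `pending_invites.json` and the `send_invite(code, payload)` callable are hypothetical stand-ins for whatever GridSync actually uses to run the wormhole exchange, not existing GridSync or magic-wormhole APIs.

```python
import json
import os

PENDING_FILE = "pending_invites.json"  # hypothetical location


def load_pending():
    """Return all invites that were in progress when we last ran."""
    if not os.path.exists(PENDING_FILE):
        return {}
    with open(PENDING_FILE) as f:
        return json.load(f)


def save_pending(code, payload):
    """Record an in-progress invite before attempting the exchange."""
    pending = load_pending()
    pending[code] = payload
    with open(PENDING_FILE, "w") as f:
        json.dump(pending, f)


def clear_pending(code):
    """Forget an invite once the other side has confirmed receipt."""
    pending = load_pending()
    pending.pop(code, None)
    with open(PENDING_FILE, "w") as f:
        json.dump(pending, f)


def resume_pending(send_invite):
    """On startup, retry any invites interrupted by a restart.

    `send_invite(code, payload)` stands in for the function that
    actually re-runs the wormhole exchange; it should return True
    once the invite has been delivered.
    """
    for code, payload in load_pending().items():
        if send_invite(code, payload):
            clear_pending(code)
```

The idea is simply that an invite is recorded before the exchange starts and forgotten only after the other side confirms, so a restart leaves behind exactly the invites that still need to be retried.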