Persistent connections #14
Closed
ghost added the "2 - Working" label on May 4, 2014
…the `limbo` routine; and a little more logging for debugging purposes.
…push notifications with the same ID were being generated. In the future though, maybe we should look into using sequential IDs rather than random ones.
So I've realised that organising the sent notifications using channels alone is not sufficient / flat-out wrongly thought out. Next step is to record a timestamp with the sent notifications.
…lice; this enables correct handling of timeouts. Now, instead of generating success responses and sending them to ourselves (which was broken and a dumb idea in the first place), we just record the time each notification was sent and scan through the list of sent notifications every so often to discard the old (older than TimeoutSeconds) ones.
Closing this - I've not given this library any thought in years and have no intention to do so :)
This is the outline of how I was thinking of implementing #6.
Initial main structure of a persistent connection; completely non-functional as of now.
What's changed:
- PushNotification.Identifier is now a uint32; it means we can use the whole identifier range. This also means that IDENTIFIER_UBOUND is no longer needed.
- A new Response type is defined which maps directly to the Apple response.
What's added:
- There's now a Connection, which is backed by a simple system of goroutines connected by queues. The way it will work is this:
A sender goroutine runs through the push notification queue (connecting to APNS if the connection is down), writes the notifications, and passes them on to the sent queue.
A reader goroutine blocks waiting for a response from APNS, passes it on, and closes the connection.
A 'limbo' goroutine keeps a queue of sent notifications, and discards / requeues notifications when it receives responses.
~~It also sends "success" responses to itself every now and then, to flush out the notifications which have been around longer than TimeoutSeconds.~~ That was a bad idea; now it just checks the backlog every second for notifications that are too old.

Things I still need to do:
- Actually connecting to APNS
- Actually sending notifications
- The sender should probably back off exponentially if it can't reconnect (up to some limit)
- Documentation
- Tests
- Probably lots more things
Things I still need to think about:
The reader goroutine might need to be spawned each time the connection is started, rather than running continuously (because what happens when the connection is down? It can't reconnect itself; that's the sender's responsibility).
It's now started within connect() each time.
That is to say, if for some reason the sender is unable to reconnect for an extended period of time (say, in a network outage)
the notifications will just pile up and you won't notice right away that they're not being sent.
On the one hand we could say "this is really bad" and add another error queue for connection errors; but how can the user reasonably deal with them anyway?
Since avoidable errors (bad configuration, etc.) would probably be caught in Start(), we can probably just log 'em and hope?
Now everything is mostly functional apart from two main issues:
There's a potential race condition / synchronisation issue with sender() and reader() accessing the same tls.Conn. I think that's fixed now.