
Persistent connections #14

Closed
wants to merge 23 commits

Conversation

draaglom
Collaborator

This is the outline of how I was thinking of implementing #6.

Initial main structure of a persistent connection; completely non-functional as of now.

What's changed:

  • PushNotification.Identifier is now a uint32, so we can use the whole identifier range. This also means that IDENTIFIER_UBOUND is no longer needed.
  • A new Response type is defined which maps directly to the Apple response.

What's added:

  • There's now a Connection, which is backed by a simple system of goroutines connected by queues. The way it will work is this:

A sender goroutine runs through the push notification queue (connecting to APNS if the connection is down), writes the notifications, and passes them on to the sent queue.

A reader goroutine blocks waiting for a response from APNS, passes it on and closes the connection.

A 'limbo' goroutine keeps a queue of sent notifications, and discards / requeues notifications when it receives responses.

It originally also sent "success" responses to itself every now and then, to flush out notifications that had been around longer than TimeoutSeconds. That was a bad idea; now it just checks the backlog every second for notifications that are too old.

Things I still need to do:

  • Actually connecting to APNS
  • Actually sending notifications
  • The sender should probably back off exponentially if it can't reconnect (up to some limit)
  • Resolve sender() / reader() race condition
  • Reconnecting correctly after a bad push notification
  • Graceful shutdown
  • Strip out excessive logging
  • Documentation
  • Tests
  • Probably lots more things

Things I still need to think about:

  • The reader goroutine might need to be spawned each time the connection is started, rather than running continuously (what happens when the connection is down? It can't reconnect itself; that's the sender's responsibility).

    Update: it's now started within connect() each time.
  • Now that the sender routine is responsible for reconnecting itself each time it disconnects, the user doesn't have direct access to the state of the connection.
    That is to say, if for some reason the sender is unable to reconnect for an extended period (say, during a network outage),
    the notifications will just pile up, and you won't notice right away that they're not being sent.
    On the one hand we could say "this is really bad" and add another error queue for connection errors; but how can the user reasonably deal with them anyway?
    Since avoidable errors (bad configuration, etc.) would probably be caught in Start(), we can probably just log them and hope?
  • Graceful shutdown feels complicated because of the channels going in a loop. If a goroutine doesn't know its input stream is finished, it can't close its downstream channels...

Now everything is mostly functional apart from two main issues:

  • If you send a whole bunch of notifications at once, not all of them necessarily arrive. I need to verify whether:
    • a) sending >30 notifications a second to a single device is bad behaviour that gets throttled, or
    • b) something is wrong with the code's handling of / reconnecting after bad notifications
  • There's a potential race condition / synchronisation issue with sender() and reader() accessing the same tls.Conn. Update: I think that's fixed now.

Patrick Molgaard added 10 commits May 1, 2014 14:16
…ctional as of now.

@ghost added the "2 - Working" label May 4, 2014
Patrick Molgaard added 3 commits August 21, 2014 19:19
the `limbo` routine; and a little more logging for debugging purposes.
…push notifications with the same ID were being generated.

In the future though, maybe we should look into using sequential IDs rather than random ones.
@draaglom
Collaborator Author

draaglom commented Sep 5, 2014

So I've realised that organising the sent notifications using just channels is not sufficient / flat-out wrongly thought out. Next step is to record a timestamp with each sent notification.

Patrick Molgaard added 4 commits September 16, 2014 17:15
…lice; this enables correct handling of timeouts.

Now, instead of generating success responses and sending them to ourselves (which was broken and a dumb idea in the first place),
we just record the time each notification was sent and periodically scan through the list of sent notifications to discard the ones older than TimeoutSeconds.
@draaglom
Collaborator Author

Closing this - I've not given this library any thought in years and have no intention to do so :)

@draaglom draaglom closed this Jun 12, 2020