
Expose size of the push manager's queue #20

Closed
danmassie opened this issue Nov 14, 2013 · 5 comments
@danmassie

When adding messages asynchronously, the entire list of messages (~200k) is added to the queue almost immediately. While this is OK for now, it would be nice to be able to wait to enqueue more messages until the queue has drained to an acceptable size. Exposing the size of the queue would make this possible and would help prevent it from growing too large, which could result in out-of-memory errors.
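For illustration, the caller-side backpressure this describes might look like the sketch below. The `getQueueSize()` accessor is hypothetical (it is exactly what this issue asks for; the current `PushManager` API does not expose it), so a stand-in interface is used here:

```java
import java.util.concurrent.TimeUnit;

public class ThrottledEnqueueExample {

    // Hypothetical accessor; the current PushManager API does not expose
    // its queue size. This interface stands in for the proposed method.
    interface QueueSizeSource {
        int getQueueSize();
        void enqueue(String notification);
    }

    static final int HIGH_WATER_MARK = 10_000;

    // Waits whenever the queue is above the high-water mark before
    // enqueuing more, keeping memory use bounded on the caller's side.
    static void throttledEnqueue(QueueSizeSource manager, Iterable<String> notifications) {
        for (String notification : notifications) {
            while (manager.getQueueSize() >= HIGH_WATER_MARK) {
                try {
                    TimeUnit.MILLISECONDS.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            manager.enqueue(notification);
        }
    }
}
```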

@flozano

flozano commented Nov 14, 2013

I totally agree with this one 👍

I already have an SQS queue in front of the APNs push-sending components, and the current unbounded in-memory queue would mean draining it and holding everything in memory.

@marcomorain

You could use a Guava RateLimiter [1] to limit the rate at which messages are added
to the queue, and do something like this:

https://gist.github.com/marcomorain/7470870#file-ratelimitedpushmanager-java

[1]
http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/util/concurrent/RateLimiter.html
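To keep the example self-contained (and avoid a Guava dependency here), the sketch below uses a minimal stand-in with the same `create`-rate / blocking-`acquire` shape as Guava's `RateLimiter`; in real code you would use Guava's implementation, as in the gist linked above:

```java
import java.util.Queue;
import java.util.concurrent.locks.LockSupport;

public class RateLimitedEnqueueExample {

    // Minimal stand-in for Guava's RateLimiter (create/acquire), kept
    // dependency-free for illustration. Prefer the real RateLimiter in
    // production code.
    static final class SimpleRateLimiter {
        private final long intervalNanos;
        private long nextFreeTime = System.nanoTime();

        SimpleRateLimiter(double permitsPerSecond) {
            this.intervalNanos = (long) (1_000_000_000L / permitsPerSecond);
        }

        // Blocks until the next permit is available.
        synchronized void acquire() {
            long now = System.nanoTime();
            if (nextFreeTime > now) {
                LockSupport.parkNanos(nextFreeTime - now);
            }
            nextFreeTime = Math.max(nextFreeTime, now) + intervalNanos;
        }
    }

    // Throttles additions to the queue at the limiter's rate, analogous
    // to the RateLimitedPushManager gist linked above.
    static void enqueueAll(Queue<String> queue, Iterable<String> notifications,
                           SimpleRateLimiter limiter) {
        for (String notification : notifications) {
            limiter.acquire();   // blocks until a permit is available
            queue.add(notification);
        }
    }
}
```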


@jchambers
Owner

@danmassie Thanks kindly for both the suggestion and the pull request!

I'm hesitant on this one because the queue size can fluctuate quite a bit during normal operation. Here's a situation I worry about:

  1. Caller enqueues a couple thousand notifications.
  2. We write all of the notifications to the wire. Queue is now empty.
  3. Caller checks queue size and gets 0 as a result.
  4. The first notification is rejected by the APNs gateway. All subsequent messages are added back to the queue.
  5. The queue size is now "a couple thousand" - 1, but the caller still thinks it's empty.

Generally, I'm worried that the number of notifications in the queue isn't super meaningful until we've shut down (see #4), and just exposing the queue's size could lead to a false sense of certainty about the state of things. If the goal is to prevent memory usage from getting out of control, though, I wonder if there's something else we can try.

To be clear, though, I would do this in a heartbeat (and would have already done it) if we could be more certain about the fate of non-rejected notifications.

@danmassie
Author

I think that exposing the size of the queue, even as a rough indicator, is better than nothing at all. I appreciate that it can change from moment to moment, but that is the nature of the process.

@jchambers
Owner

So I've been thinking about this and have a plan that I like a bit better than exposing the internal queue size.

Right now, we have one queue that does everything. New messages from the outside go into that queue, and so do messages that need to be resent. What I'd like to do is break that into two queues: an "interacts with the outside world" queue and a retry queue. The former -- the queue that accepts messages from the outside -- could optionally be provided by callers at construction time. If you wanted to, you could provide a bounded queue implementation.

The plan would be to make sure the retry queue is empty before taking new messages from the public-facing queue. That would mean you could be reasonably confident that things were moving forward if the public-facing queue started to drain. You could monitor the public queue and even alter its contents without fear of bad things happening to Pushy's internals. I think that would also help organize things in such a way that we could consider some robust solutions to #24, too.
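Under that plan, the caller side might look like the sketch below. The constructor that accepts a caller-supplied queue is hypothetical (it is the proposal, not the current API), shown here only as a comment; the bounded-queue behavior itself is just standard `java.util.concurrent`:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedQueueExample {

    // A bounded public-facing queue: once full, put() blocks and offer()
    // returns false, giving the caller natural backpressure instead of
    // unbounded memory growth.
    static BlockingQueue<String> newBoundedPublicQueue(int capacity) {
        return new ArrayBlockingQueue<>(capacity);
    }

    public static void main(String[] args) {
        BlockingQueue<String> publicQueue = newBoundedPublicQueue(10_000);

        // Hypothetical construction under the proposed design; the current
        // PushManager does not yet accept a caller-supplied queue:
        // PushManager<SimpleApnsPushNotification> manager =
        //         new PushManager<>(environment, keyStore, password, publicQueue);

        publicQueue.offer("notification-1");
        System.out.println(publicQueue.size());
    }
}
```

Because `ArrayBlockingQueue` rejects additions beyond its capacity, the caller gets its memory bound "for free" without ever needing to observe the internal retry queue's size.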

Sound like that would solve your problem?
