Remote Signals #63


dei79 commented Aug 12, 2012

I added the option to send a signal to the daemon directly from the Rails application, without needing to know where the daemon is running. For this, a new model, RemoteSignal, was introduced. Just send a HUP signal with the following code:

Rapns::RemoteSignal.push :key => :hup

Added support for remote signal so it's possible to notify the daemon for reloading the application directly in your rails app. Just use

Rapns::RemoteSignal.push :key => :hup

This pull request fails (merged 0e4436d into 7dbbecf).


ileitch commented Aug 14, 2012

I'm not sure I like this. Why not just send a HUP signal over SSH?

dei79 commented Aug 14, 2012

The key question for me is how to know which server and which process id. If I want to send the HUP signal via SSH, I need to build the following pieces:

  • Store the server and pid somewhere in the database (the server hosting the rapns daemon can change from time to time)
  • Every application server needs to have access to the daemon server (deploying SSH keys)

After that I can send the HUP via SSH.

Did I miss something?
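The two pieces above could be sketched roughly like this (a hypothetical illustration only: the host, user, and pid file path are made-up placeholders, not rapns conventions):

```ruby
# Sketch: sending HUP over SSH, assuming the daemon host and pid file
# location have already been looked up (e.g. from the database).
DAEMON_HOST   = "rapns.internal"     # assumed hostname
RAPNS_PIDFILE = "/var/run/rapns.pid" # assumed pid file path

def hup_over_ssh_command(host, pid_file)
  # Read the pid remotely and deliver the signal in one ssh call
  "ssh deploy@#{host} 'kill -HUP $(cat #{pid_file})'"
end

cmd = hup_over_ssh_command(DAEMON_HOST, RAPNS_PIDFILE)
puts cmd
# system(cmd) would actually send the signal
```

This still requires distributing SSH keys to every application server, which is the coupling being discussed here.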

dei79 commented Aug 15, 2012

So, are there any better options to realize my requirement?


ileitch commented Aug 16, 2012

If you're using Capistrano you can easily distribute a command to all servers. I agree that it'd be nice for rapns to make this easier, I'm just not sure I like this approach. I'll have a think about it.

dei79 commented Aug 16, 2012

I'm open to implementing another way that fits your design goals better. I would also like to see an active flag on the application, so that whenever someone stores or updates the database, the daemon is refreshed. If we follow this solution we need to poll the applications.

Have you thought about using Resque instead of a polling daemon? As I understand it, we could write a worker process which holds the connections and receives the messages from the redis database. If it makes things too complex, forget this idea :-)


mattconnolly commented Sep 18, 2012

At the moment, I'm looking at extending rapns to use redis for a queue instead of polling the database. I figure that once a notification was saved in the database, you could simply push its id into a redis list and have the rapns server process doing a blocked-pop off the list.
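The push/blocking-pop flow described above can be sketched as follows. This is an illustration, not rapns code: in production the list would live in redis (LPUSH from the app after save, BRPOP in the rapns process), and a thread-safe Queue stands in for the redis list here.

```ruby
require "thread"

notifications = Queue.new  # stands in for a redis list, e.g. "rapns:notifications"

# App side: after the notification row is saved, push its id
saved_id = 42
notifications << saved_id  # redis equivalent: redis.lpush("rapns:notifications", saved_id)

# rapns side: blocking pop, then load the row by id and deliver it
id = notifications.pop     # redis equivalent: redis.brpop("rapns:notifications")
puts "delivering notification ##{id}"
```

The notification data itself stays in the database; only the id travels through the queue, so the polling code path can remain as a fallback.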

In terms of making the configuration portable, I wonder about storing all of the rapns setup in a rails initialiser in ./config/initializers/rapns.rb for example (like devise, etc).
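A devise-style initializer could look something like the fragment below. To be clear, this whole shape is hypothetical: the Rapns.configure API and every option name here are invented for illustration and are not the actual rapns interface.

```ruby
# config/initializers/rapns.rb — hypothetical, devise-style shape
Rapns.configure do |config|
  config.use_redis     = true                         # invented option
  config.redis_url     = "redis://localhost:6379/0"   # invented option
  config.poll_interval = 2  # fallback DB polling when redis is off
end
```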

Of course, using redis would be optional, and without it the polling behavior could still function. Also, I thought that Postgres had some kind of blocking queue functionality... I've never used Postgres before, but that could be another option.


dei79 commented Sep 18, 2012

I would really like to see a redis/resque implementation. This would fit much better to my deployment concept. Let me know when you start working on it, I can contribute :-)


mattconnolly commented Sep 18, 2012

Sounds great. So, we could have redis queues for:

  • notifications
  • signals

Anything else?

dei79 commented Sep 19, 2012

I would like to see a notifications queue per application, e.g.


So the workers can take the notifications in parallel rather than serialized. In addition, I would like to see a feature where the worker detects new applications automatically and can handle them. The connection can be established when the first message appears, but should be held open over the lifetime of the worker.


ileitch commented Sep 19, 2012

I would like to add redis support; currently I am concentrating on support for Google Cloud Messaging.

  • PostgreSQL listen/notify cannot be used because rapns needs to handle notifications that have a deferred delivery time.
  • Per app queues are not needed, rapns already does this internally. The throughput of an app does not generally affect the delivery latency of other apps.

mattconnolly commented Sep 19, 2012

I agree. In most cases the requests would ultimately be serialised over a network interface anyway, and the connection to the APN servers would be more of a bottleneck than pulling them from redis. Plus this would keep the redis implementation more in line with the existing implementation for users who don't have, or don't want, redis.

Triggering the deferred delivery notifications would still require some extra work for redis, just the same as for postgres. I'll have a look around and see if there are any scheduling gems that we can use when using redis.
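One possible shape for deferred deliveries with redis (all names illustrative, not rapns code): a sorted set keyed by deliver-after timestamp, polled for due members. A plain Hash stands in for the sorted set in this sketch.

```ruby
deferred = {}  # redis: ZADD rapns:deferred <timestamp> <notification_id>

def schedule(deferred, id, deliver_after)
  deferred[id] = deliver_after.to_i
end

def due_ids(deferred, now)
  # redis: ZRANGEBYSCORE rapns:deferred 0 <now>
  deferred.select { |_id, ts| ts <= now }.keys
end

now = Time.now.to_i
schedule(deferred, 1, now - 60)    # already due
schedule(deferred, 2, now + 3600)  # due in an hour
puts due_ids(deferred, now).inspect  # → [1]
```

Due ids would then be pushed onto the ordinary notifications list, so the delivery path stays the same for immediate and deferred messages.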

dei79 commented Mar 23, 2013

I want to warm up the HUP discussion. Currently I'm at a point where I want to integrate the latest version of RAPNS and I'm struggling with sending the HUP. I have the following use case:

  • Customer registers on page
  • Customer generates an APN certificate and uploads this certificate
    • System should generate the Rapns::App (solved 👍 )
    • System should send the HUP signal to my daemon (how ?)

Currently I have a bunch of application servers and I can't say on which server the end user does the upload. My application servers are also totally independent from the worker servers, and everything is loosely coupled with queues like Resque.

At the end I don't want to give the application servers knowledge about the worker servers. What is the best way to transport the HUP signal to the daemon? I see one option:

  • Building a Resque job which sends the HUP
  • The Resque worker just runs on the same server as the daemon
  • Introducing an API for this in the app

Would be great to get this part of the standard GEM 👍

What is your opinion?

dei79 commented Mar 23, 2013

I just created a Resque worker which runs only on the machine where the rapns daemon is running. This works fine for me:

class RapnsHupWorker
  @queue = :rapns

  def self.perform
    Rails.logger.tagged("Resque;Rapns HUP Worker") do
      unless rapns_running?
        Rails.logger.info("RAPNS is not running, nothing todo")
        return
      end
      Rails.logger.info("Using RAPNS pid file: #{rapns_pid_file}")
      Rails.logger.info("RAPNS pid is: #{rapns_pid}")
      Rails.logger.info("Sending #{signal} signal")
      Rails.logger.info("Calling: kill -#{signal} #{rapns_pid}")
      system("kill -#{signal} #{rapns_pid}")
    end
  end

  # The helper bodies below were elided in the original comment; these
  # are minimal plausible implementations (the pid file path is an
  # assumption specific to one deployment).
  def self.rapns_pid_file
    Rails.root.join("tmp", "pids", "rapns.pid").to_s
  end

  def self.rapns_pid
    File.read(rapns_pid_file).strip.to_i
  end

  def self.rapns_running?
    File.exist?(rapns_pid_file)
  end

  def self.signal
    "HUP"
  end
end
ileitch commented Mar 26, 2013

Your Resque approach isn't a bad idea, though I'm not sure I can package this approach out-of-the-box, as it requires setup which is beyond the scope of Rapns.

Would a simple HTTP API work for you? Your app servers would just hit a sync endpoint. This requires your app servers to know where to talk to your worker servers, though you could decrease the coupling by setting up a host for Rapns, i.e rapns.internal.tld:1234.
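The suggested sync endpoint could look roughly like the bare Rack app below, running next to the daemon. The route, the pid-file path, and the response bodies are all assumptions for illustration, not part of rapns.

```ruby
RAPNS_PID_FILE = "/var/run/rapns.pid" # assumed location

# Reads the daemon's pid and delivers the HUP signal
signal_daemon = lambda do
  Process.kill("HUP", File.read(RAPNS_PID_FILE).to_i)
end

sync_app = lambda do |env|
  if env["REQUEST_METHOD"] == "POST" && env["PATH_INFO"] == "/sync"
    signal_daemon.call
    [200, { "content-type" => "text/plain" }, ["synced\n"]]
  else
    [404, { "content-type" => "text/plain" }, ["not found\n"]]
  end
end
```

App servers would then POST to a stable internal hostname (e.g. rapns.internal.tld:1234), keeping the daemon's actual location out of the app configuration.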


mattconnolly commented Mar 26, 2013

I think it's a great example to add to the wiki. Not everyone will have the same configuration or be using Resque.

Also using system("kill ...") assumes that kill is in the PATH of the current environment. Is this a reliable assumption for Resque workers?

dei79 commented Mar 26, 2013

@ileitch An HTTP API is not my favorite, because I want to avoid my app servers needing to know infrastructure information about the backend services. That helps a lot when I reorganize my infrastructure.

@mattconnolly I will prepare a wiki page and add a pull request to this project. kill in the PATH should be no problem on Linux systems. If you have ideas for Windows, let me know.

ileitch referenced this pull request in rpush/rpush on Feb 3, 2014:

Remote Signals #2

ileitch closed this Nov 8, 2017
