
provide "safe" shutdown API ? #3

perezd opened this Issue · 9 comments

2 participants


first of all, this project is really well done, thank you for contributing this to open source!

I'm using Gearnode in a situation where the worker nodes may need to be restarted frequently (i.e. code deploys, maintenance, etc.), and I wanted to brainstorm additions to this library that would allow the developer to process jobs and hang up on the job server once completed, for safe termination.

I see that you've got closeConnection/close methods on the GearmanConnection object, but it might be interesting to add something to the GearmanJob API that lets you say "here is my response; hang up after you've received it, so I can no longer be considered a receiver of work and can die peacefully."

I could fork and try out some ideas, what is your take on the matter?


I'm glad you like it!

closeConnection only closes the socket; it doesn't send anything to the server, and by the gearman spec it isn't supposed to. It is up to the server to track all existing workers.

There is actually an API method for a hangup, removeServer, which in turn calls closeConnection. But I just realized it is undocumented and doesn't have a test, so I'm not sure it even works.

var gearman = new Gearnode();
gearman.removeServer("localhost"); // hangup

Anyhow, if you're willing to commit something to make the library better, that would be very cool (Y)


Ideally, you'd want to flip a bit that signals to the worker that after whatever you are working on is sent back to the job server, you intend to disconnect, and therefore should no longer receive new job requests from the job server. In particular, I am interested in "proxying" a shutdown of an OS process, similar to this:

var gearman = new Gearnode();

gearman.addFunction("doStuff", function(payload, job){
    job.complete("some response"); // once this job completes, consider closing the connection
});

// closes after the next job is processed, fails pending jobs back to the server
process.on("SIGQUIT", gearman.closeSoon.bind(gearman)); // bind so `this` is the Gearnode instance

Looking closer, what if closeSoon mutated an internal property that was monitored within the processQueue loop?

GearmanConnection.prototype.processQueue = function(){
    var command;

    // if no connection yet, open one
    if(!this.connected){
        return this.connect();
    }

    // get commands as FIFO
    if(this.command_queue.length){
        this.processing = true;
        command = this.command_queue.shift();
        process.nextTick(this.runCommand.bind(this, command));
        this.processing = false;
    }

    // proposed addition: close once a shutdown has been requested
    if (this.shouldClose) {
        this.closeConnection();
    }
};
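As a toy illustration of that idea, here is a stand-in queue object (not Gearnode's real GearmanConnection; only the `shouldClose` and `command_queue` names come from the discussion above) that checks the flag after draining its queue:

```javascript
// Stand-in for a connection with a command queue; runs queued commands
// synchronously, then honors a pending shutdown request.
function QueueStub() {
  this.command_queue = [];
  this.shouldClose = false;
  this.closed = false;
}

QueueStub.prototype.processQueue = function () {
  while (this.command_queue.length) {
    this.command_queue.shift()(); // run the next queued command (FIFO)
  }
  // once the queue is empty, check whether a shutdown was signaled
  if (this.shouldClose) {
    this.closed = true; // stand-in for closeConnection()
  }
};

var stub = new QueueStub();
var ran = [];
stub.command_queue.push(function () { ran.push("job1"); });
stub.command_queue.push(function () { ran.push("job2"); });
stub.shouldClose = true;
stub.processQueue();
```

The point being: queued work still finishes before the connection is torn down, so in-flight jobs aren't dropped.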



Actually, it makes more sense to do this in the worker's handler_NO_JOB command handler:

GearmanConnection.prototype.handler_NO_JOB = function(command){
    // proposed: if a shutdown was signaled, hang up instead of going to sleep
    if (this.should_close == true){
        return this.closeConnection();
    }
    // ...otherwise fall through to the normal sleep behavior
};

Before it tells the server it's asleep, close the connection if you've signaled that it should close.

Signaling could look like this:

Gearman.prototype.closeSoon = function() {
  for(var i = this.server_names.length - 1; i >= 0; i--) {
    var connection = this.servers[this.server_names[i]].connection;
    connection.should_close = true;
  }
};
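The flag-then-close flow can be sketched end-to-end with stand-in objects (none of this is Gearnode's real API beyond the names discussed; PRE_SLEEP/NOOP are the gearman protocol's worker sleep/wake packets):

```javascript
// Stand-in for a worker connection; tracks only what the sketch needs.
function FakeConnection(name) {
  this.name = name;
  this.should_close = false;
  this.closed = false;
}

// Mirrors the handler_NO_JOB idea: when the server reports no pending
// work, close instead of going to sleep if a shutdown was requested.
FakeConnection.prototype.handler_NO_JOB = function () {
  if (this.should_close) {
    this.closed = true; // stand-in for closeConnection()
    return;
  }
  // otherwise the worker would send PRE_SLEEP and wait for a NOOP
};

// Mirrors the proposed Gearman.prototype.closeSoon: flag every connection.
function closeSoon(connections) {
  for (var i = connections.length - 1; i >= 0; i--) {
    connections[i].should_close = true;
  }
}

var conns = [new FakeConnection("a"), new FakeConnection("b")];
closeSoon(conns);
conns.forEach(function (c) { c.handler_NO_JOB(); });
```

Closing at NO_JOB is a natural safe point: the worker has nothing in flight, so nothing is failed back to the server.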

Wow, so none of my guesses were right. I'll do a proper fork and get back to you. I think I see what to do now.


Ok, let me know if you have something

@perezd perezd closed this
@perezd perezd reopened this

Where would you recommend I look to implement some sort of signal? I've been messing with it, but not sure I am doing it right :(


I'm not actually sure this library properly fails over to servers that are added via addServer. If it cannot communicate with a server you've specified via addServer, it never moves on to the alternate one. Any thoughts on that?

oh, it appears you can't have two servers running on the same hostname. got it..


Yeah, the latter one is a design fail on my behalf, since it turned out that connecting to the same server multiple times can be genuinely needed behavior.
