Connection Pool refactor #274

Merged (4 commits) on Feb 22, 2013

Conversation

brianc (Owner) commented on Feb 20, 2013

Addressing #227, #224, #22, #154, and #137

This is the bulk of the pool refactor. It is still a work in progress. The old API will remain intact but officially deprecated. The new API is as follows:

pg.connect(/* optional connection parameters, */ function(err, client, done) {
  setTimeout(function() {
    done(); //returns client to the pool regardless of whether or not client was used at all
  }, 1000);
});

//you can use the client as much as you like, drain it multiple times, etc.  The release is totally up to you
pg.connect(function(err, client, done) {
  client.query('BEGIN', function(err, res) {
    //calling done with an error will destroy the client & remove it from the pool
    if(err) return done(err);
    //allow the client to emit 'drain'; it no longer matters since release is handled by done()
    process.nextTick(function() {
      client.query('COMMIT', done);
    });
  });
});

The API also allows you to force the pool to destroy and remove a client by passing an instance of Error (or anything truthy, actually) to the done() callback. I modeled this on the async testing API in https://github.com/visionmedia/mocha

pg.connect(function(err, client, done) {
  //force the client pool to destroy the client
  done(new Error("SOMETHING BAD HAPPEND"));
});
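
In practice, passing the error to done composes with the transaction pattern above: if a query inside the transaction fails, roll back and then hand the error to done so the pool discards the client. A minimal sketch of that idea (the ROLLBACK handling and the example table are illustrative, not part of this PR):

pg.connect(function(err, client, done) {
  if(err) return console.error('could not acquire a client', err);
  client.query('BEGIN', function(err) {
    if(err) return done(err); //client is in an unknown state - let the pool destroy it
    //'items' is a hypothetical table, used only for illustration
    client.query('INSERT INTO items(name) VALUES($1)', ['widget'], function(err) {
      if(err) {
        //roll back, then pass the original error to done so the client is removed
        return client.query('ROLLBACK', function() { done(err); });
      }
      client.query('COMMIT', done); //clean finish: the client goes back to the pool
    });
  });
});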

There are a few edge cases I tried to test more closely as well, including the following (note: never do this IRL):

pg.connect(function(err, client, done) {
  done(); //return the client to the pool immediately
  setTimeout(function() {
    //the client will emit an error while sitting idle in the pool
    client.emit('error', new Error("I GOT DISCONNECTED FROM THE DATABASE IN THE BACKGROUND!!"));
  }, 1000);
});

When a client emits an error in the background, the root pg object will emit the error and, more importantly, the client will be destroyed and removed from the pool. The root pg object listens for pool errors and re-emits them, so you don't have to handle errors on a pool-by-pool basis. This should cover cases where postgres closes or fails in the background: all the clients will disconnect with errors and be removed from the pool. There is still work to do around failover and heartbeats, however.
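
Since the root pg object re-emits these errors, a single application-level handler is enough. A minimal sketch, assuming the re-emitted event is named 'error' and delivers the error object (the exact event signature isn't spelled out in this PR):

var pg = require('pg');

//one place to observe background client failures instead of wiring up per-pool handlers
pg.on('error', function(err, client) {
  console.error('an idle client errored and was removed from its pool', err);
  //no cleanup needed here - the pool has already destroyed and removed the client
});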


I've exposed the pools hanging off of the root pg object as pg.pools. Each pool is keyed by JSON.stringify of the connection string / connection parameters / {} it was created with, so this:

pg.connect(function(err, client, done) {

});
//equals
pg.pools.all[JSON.stringify({})].createOrGet({}, function(err, client, done) {

});

All of the pools are exposed in a hash at pg.pools.all. This lets you use the acquire and release methods directly if the pg.connect method doesn't suit your needs, supply a completely custom pool implementation, or do other helpful things like shut down or destroy a particular pool. Ideally another module would be used for pooling, but at the very least this fixes the built-in pool so it doesn't completely break and "leak" in strange gotcha scenarios.
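
For illustration, here's how using a pool directly might look. This is a sketch under a couple of assumptions: a pool for the default (empty) parameters already exists (e.g. after an earlier pg.connect), and acquire/release follow generic-pool style callback signatures; only the method names acquire and release come from this PR.

var pg = require('pg');

//grab the pool keyed by the default (empty) connection parameters
var pool = pg.pools.all[JSON.stringify({})];

pool.acquire(function(err, client) {
  if(err) return console.error('could not acquire a client', err);
  client.query('SELECT NOW()', function(err, result) {
    pool.release(client); //with direct access, releasing is always our responsibility
    if(err) return console.error('query failed', err);
    console.log(result.rows[0]);
  });
});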

I'm looking for feedback & collaboration on this, so don't be shy!
