
Multi-process deployment #170

Open
mikermcneil opened this issue Feb 26, 2013 · 38 comments

Comments

@mikermcneil
Member

Use cluster or fleet to allow for multi-instance deployment

@ghost ghost assigned techpines Mar 1, 2013
@dcbartlett
Contributor

I coin this Armada

@mikermcneil
Member Author

Love it

On Fri, Mar 8, 2013 at 1:59 PM, Dennis Bartlett notifications@github.com wrote:

I coin this Armada



Mike McNeil
Founder
http://www.linkedin.com/in/mikermcneil/ http://twitter.com/mikermcneil
http://github.com/balderdashy


@dcbartlett
Contributor

This will need to be planned out before we attempt to implement it. We need to figure out the following:

Master-node relationship and communication.
How to handle sails lifting and lowering as needed for user load (we don't want to waste resources we don't need, now do we?).
Should there be a manual control system on the master? On all nodes?
If a master or node goes offline, how will the rest of the cluster be notified?
Will the cluster be able to assign a new master in the event of losing one?

I'm sure there are more things to think about here.

@ghost ghost assigned dcbartlett and techpines Mar 15, 2013
@techpines
Member

Hmm.

I thought we were talking about cluster support, like this http://nodejs.org/api/cluster.html

So that you could run sails easier on multi-core machines.

Integrating fleet into sails does not make a lot of sense to me, but maybe I'm wrong. Fleet is a devops deploy tool that should more or less work with sails out of the box.

@techpines
Member

As long as people aren't maintaining state in their sails applications, scaling won't take much more work than sticking your nodes behind a load balancer.

Just like scaling any other node application.

@mikermcneil
Member Author

@techpines Good point-- in that case, let's assume we're deploying on single-core instances behind a load balancer and shift our focus to:

  1. Session store adapter #171: allow easy configuration of the underlying session store to use Redis (for now we can assume Redis). We don't have to use adapters yet, to keep things simple, and the default is still in-memory. sails.config.session contains the config object right now and lets you hook up Express; it is not, however, set up to also control Socket.io's session. For now it would be best to bundle connect-redis in Sails.

We'd be replacing the current config:

module.exports.session = {
  secret: 'k3yboardKKAT', // the optional session secret 
  store: {}, // the connect session store object
  key: "sails.sid"   // the cookie key
}

(remove store option)

With:

module.exports.session = {
  secret: '', // the optional session secret
  key: '', // cookie key
  adapter: 'redis',  // or can be 'memory'
  db: '', // required, database index to use
  pass: '', // required, password for redis authentication
  host: '',  // optional, defaults to localhost
  port: 6379, // optional, defaults to 6379
  ttl:  500  // optional, in seconds, defaults to whatever connect-redis defaults to
};

We wouldn't actually be using the redis adapter yet, but at least this way the API won't change when we do.

  2. #172: allow easy configuration of the underlying socket server to use Redis (for now we can assume Redis). We don't have to use adapters yet, to keep things simple, and the default is still in-memory. Same config options as for the session above.

@mikermcneil
Member Author

One last thing- am I understanding right that for #172, once we configure socket.io to do this, it can still sit behind a load balancer and be OK? The ws:// requests will be load balanced as well, right?

@mikermcneil
Member Author

Ah, one more thing: in this setup, where is the SSL certificate dealt with-- in the load balancer, or in each node.js server instance? This is relevant for https:// and wss:// requests. Thanks -mm

@techpines
Member

Yea we can get sessions and web sockets going first.

As for load balancers, I think nginx or a native nodejs balancer could support web sockets.
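For reference, an nginx setup along those lines might look like the following. This is only a sketch (the upstream name and ports are made up), but the Upgrade/Connection headers shown are what nginx needs to proxy WebSocket traffic, and ip_hash keeps a client pinned to one backend, which helps socket.io handshakes:

```nginx
upstream sails_cluster {
    ip_hash;                    # sticky-ish routing: same client -> same backend
    server 127.0.0.1:1337;
    server 127.0.0.1:1338;
}

server {
    listen 80;

    location / {
        proxy_pass http://sails_cluster;
        proxy_http_version 1.1;                      # required for WebSockets
        proxy_set_header Upgrade $http_upgrade;      # pass the ws:// upgrade through
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```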

@mikermcneil
Member Author

Awesome- I'll check with engine and see how they've been LBing their Sails cluster.

On Fri, Mar 15, 2013 at 4:49 PM, Brad Carleton notifications@github.com wrote:

Yea we can get sessions and web sockets going first.

As for load balancers, I think nginx or a native nodejs balancer could
support web sockets.




@mikermcneil
Member Author

Best practice for scalability at the moment is putting instances behind a load balancer. For utmost efficiency, you'd want to provision single-core compute units (using a service like Modulus or Nodejitsu), but you may want to use Joyent, EC2, or Rackspace servers-- it's just that getting more cores won't offer any additional benefit.

Then, pull shared memory state (your session and socket store) out into Redis, make sure you're using a scalable solution for your main app database, and you should be good to go on the data side. As for assets, you'll want to push gzipped images and minified CSS and JS out to a content delivery network like CloudFront.

For help with that, check back here or in the #sailsjs channel on freenode (http://webchat.freenode.net/)

@techpines
Member

@mikermcneil Session stuff is done. Do you have a syntax for redis socket.io config?

@techpines
Member

Something like:

sockets: {
    adapter: 'memory'
}

@mikermcneil
Member Author

@techpines that makes sense to me

For this and the session store, we'll just want to make sure to generate the default config in new projects with a note about the possible options (since we're not actually using the adapters yet)

Thanks!

@techpines
Member

Hey I just pushed up the redis changes. I had it working on a trivial example with sessions and pub/sub working simultaneously.

@mikermcneil
Member Author

Awesome, can't wait to take a look.

@mikermcneil
Member Author

Pushing this off to "Someday" since the n-single-process-instances-behind-a-load-balancer approach works as a scalability approach for the immediate term, especially coupled with a Node.js-oriented PaaS provider like Nodejitsu or Modulus.

@mikermcneil
Member Author

Full Redis session and MQ support is in as of 2226824, and will be released as part of 0.9. If you can't wait, it's in the master branch!

@acornejo

acornejo commented Oct 3, 2013

See http://rowanmanning.com/posts/node-cluster-and-express/ for an example of how to get an express app running with cluster (it's actually quite easy).

I wanted to migrate a project from express to sails, but I would hate to sacrifice multi-core performance.

Since getting express working with cluster is very straightforward, and since sailsjs is based on express, would it be a leap to say it's easy to get sailsjs to work with cluster? I'm asking because I'm completely new to sails (i.e. I've only started looking into it today).

@uhho
Contributor

uhho commented Oct 4, 2013

@acornejo How about trying nginx? There is a similar discussion in #849. I wrote a short explanation of how to deploy an app in a multi-process / multi-machine environment, and I published a sample nginx configuration.

Hope that helps.

@ansdma

ansdma commented Oct 9, 2013

Thought I'd share what I do to use multiple cores with my Sails app: I simply run my app using pm2 with the switch -i max, which does all the work for me :)

@pmalek

pmalek commented Jan 29, 2014

I have been looking for some way to use sails with clustering but couldn't find one, and @alghamdi 's approach using pm2 solved my issue for now.

@mikedevita
Contributor

agreed pm2 seems to be the solution for now.

@mikermcneil
Member Author

@alghamdi @pmalek @mikedevita 👍

@seti123

seti123 commented Apr 1, 2014

Any intention to use cluster/master when instantiating Express? I played around with pm2, but eventually pm2 was unable to manage the processes and produced errors; in the end the whole pm2 utility stopped working, and I don't know what went wrong.

I think sails could simply take a config option to set the number of workers and maybe the session store, as discussed before. That would be much easier than setting up nginx, multiple startup scripts for all the sails instances, etc.; otherwise the deployment of such systems gets a lot more complicated.

@ansdma

ansdma commented Apr 1, 2014

@seti123 pm2 start app.js -i max -e err.log -o out.log does everything for me. With node.js development in general there's actually no escape from maintaining configs and scripts, e.g. nginx, monit, pm2/forever, upstart and much more ;)

@seti123

seti123 commented Apr 1, 2014

Are you not using "sails lift"? When I use node app.js with sails I get "app is not defined".
What's the best way to start sails.js with pm2? Please note we set a lot of environment variables in the startup shell script as well.

I know (we already have larger configs running 5 Node apps for one system, with automatic setup scripts of ~200 lines...) and I try to keep it small. All our other node apps use cluster, but with sails it doesn't work like in our other apps. I just want to remark that pm2 made chaos on my server (spawning many processes as if in a loop, and ending up in an inconsistent state with errors that it can't find the process IDs; only a reboot helped). But that issue shouldn't be discussed here.

@ansdma

ansdma commented Apr 1, 2014

@seti123 sails.js does have an app.js; check "Getting your app on the server".

Oh... tell me about deploying multiple node.js apps...

@Globegitter
Member

@alghamdi @mikermcneil @pmalek @mikedevita I am trying to run sails with pm2 (pm2 start app.js -i max -- --prod) on node 0.11 in cluster mode (which is the recommended way for pm2) and it starts fine, but when you actually try to use the app we get access errors about the .tmp folder.
Maybe one of you has run into this before?
@mikermcneil Could that be because there are now two sails processes running, both trying to access/write stuff in the same .tmp folder?
If so, would there be an easy way to change that?

@ansdma

ansdma commented Sep 9, 2014

@Globegitter I've never run into this before, but perhaps it's just a permissions issue. Try removing the .tmp directory and leave it to sails/grunt/pm2 to create it for you.

@qunxyz

qunxyz commented Mar 12, 2015

Maybe you could try this code by hacking app.js, then start your site with the command "forever start app.js":

process.chdir(__dirname);

var cluster = require('cluster');
var os = require('os');
// ... (the rest of the stock app.js setup goes here)

if (cluster.isMaster) {
  // Fork one worker per CPU core
  for (var i = 0, n = os.cpus().length; i < n; i += 1) {
    cluster.fork();
  }
} else {
  // Start server
  sails.lift(rc('sails'));
}

@juanpasolano

My app is running fine with the pm2 module. The problem now is that since I am making cross-domain requests with CSRF validation, only one of the clustered processes has the right token.
How do you use pm2 with CSRF?

@ansdma

ansdma commented May 5, 2015

@juanpasolano maybe storing sessions in Redis or MongoDB could help.

@juanpasolano

@alghamdi Thanks, I actually did this, but with mongo, which I didn't know was so simple with sails.
I posted the question on Stack Overflow, where I got a nice answer that pointed me to using a DB for sessions.
http://stackoverflow.com/questions/29702005/sailsjs-using-pm2-cluster-mode-and-csrf

@tarunjadhwani

Is it possible to use node.js clusters in sailsjs? I'm using Supervisor and not PM2, but want to start the app in cluster mode.

@mikermcneil mikermcneil reopened this Nov 22, 2019
@mikermcneil
Member Author

@rachaelshaw we should have sailsbot reopen closed issues when folks comment (either that, or lock them, as has become vogue in other projects recently. I like the reopen approach though)

@tarunjadhwani this is possible! (And recommended.) My experience has been that PaaS solutions like Heroku do this automatically in most cases, but if you're doing something more custom or bare metal, you'll want to set it up yourself. (Check out app.js)

@alxndrsn

alxndrsn commented Dec 2, 2019

this is possible! (And recommended.)

@mikermcneil is there an example implementation of this? Or are you just suggesting changing app.js from:

// Start server
sails.lift(rc('sails'));

to e.g.:

const cluster = require('cluster');
const os = require('os');

if(cluster.isMaster) {
  cluster.on('exit', (worker, code, signal) => {
    // TODO vary your logging and restart logic here depending on why your process has failed

    sails.log.error(`${worker.process.pid} exited with code '${code}' for signal '${signal}'.  Was suicide?  ${worker.suicide}`);

    sails.log.info('Restarting worker...');
    cluster.fork();
  });

  const cpuCount = os.cpus().length;
  sails.log.info(`starting ${cpuCount} workers...`);
  for(let i=0; i<cpuCount; ++i) {
    sails.log.info(`starting worker...`);
    cluster.fork();
  }
} else {
  sails.log.info('Worker started.  Starting sails...');
  sails.lift(rc('sails'));
}

@johnabrams7
Contributor

@alxndrsn @tarunjadhwani Here are some sails docs for setting up a multi-server / clustered setup through a hosted Redis instance.

Also: preparing for a clustered environment and deploying your own cluster in sails.

Hope this helps 👍
