Multi-process deployment #170
I coin this Armada.
Love it
This will need to be planned out before we attempt to implement it. We need to figure out the following: the master/node relationship and communication. I'm sure there are more things to think about here.
Hmm. I thought we were talking about cluster support, like this: http://nodejs.org/api/cluster.html So that you could run sails more easily on multi-core machines. Integrating fleet into sails does not make a lot of sense to me, but maybe I'm wrong. Fleet is a devops deploy tool that should more or less work with sails out of the box.
As long as people aren't maintaining state in their sails applications, scaling won't take much more work than sticking your nodes behind a load balancer. Just like scaling any other node application.
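To make the statelessness point concrete, here's a toy sketch (plain Node, no real load balancer or Redis; all names are invented for illustration): two simulated app instances each keep a hit counter in their own process memory, a round-robin "balancer" splits one user's requests between them, and the counts diverge until the state is moved to a shared store.

```javascript
// Two simulated app instances, each with its own per-process memory.
function makeInstance() {
  const localHits = new Map(); // NOT shared between instances
  return (sessionId) => {
    localHits.set(sessionId, (localHits.get(sessionId) || 0) + 1);
    return localHits.get(sessionId);
  };
}

const instances = [makeInstance(), makeInstance()];
let next = 0;
// Round-robin "load balancer".
const balance = (sessionId) => instances[(next++) % instances.length](sessionId);

// Same "user" makes 4 requests; each instance only saw half of them:
const counts = [1, 2, 3, 4].map(() => balance('user-1'));
console.log(counts); // [1, 1, 2, 2] -- not the [1, 2, 3, 4] the user expects

// With one shared store (Redis, in practice), every instance sees the same state:
const shared = new Map();
const sharedHandler = (sessionId) => {
  shared.set(sessionId, (shared.get(sessionId) || 0) + 1);
  return shared.get(sessionId);
};
const sharedCounts = [1, 2, 3, 4].map(() => sharedHandler('user-1'));
console.log(sharedCounts); // [1, 2, 3, 4]
```

In production the shared Map would be an external store such as Redis, which is exactly why the session and socket stores discussed in this thread get pulled out of process memory.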
@techpines Good point-- in that case, let's assume we're deploying on single-core instances behind a load balancer and shift our focus to:
We'd be replacing the current config:
(remove store option) With:
We wouldn't actually be using the redis adapter yet, but at least this way the API won't change when we do.
One last thing: am I understanding right that for #172, once we configure socket.io to do this, it can still sit behind a load balancer and be OK? The ws:// requests will be load-balanced as well, right?
Ah, one more thing: in this setup, where is the SSL certificate dealt with? In the load balancer, or in each Node.js server instance? This is relevant for https:// and wss:// requests. Thanks -mm
Yea we can get sessions and web sockets going first. As for load balancers, I think nginx or a native nodejs balancer could support web sockets. |
Awesome, I'll check with engine and see how they've been LBing their Sails.
Best practice for scalability at the moment is putting instances behind a load balancer. For utmost efficiency, you'd want to provision single-core compute units (using a service like Modulus or Nodejitsu), but you may want to use Joyent, EC2, or Rackspace servers; it's just that getting more cores isn't going to offer any additional benefit. Then, pull shared memory state (your session and socket store) out into Redis, make sure you're using a scalable solution for your main app database, and you should be good to go on the data side. As far as assets go, you'll want to push gzipped images and minified CSS and JS out to a content delivery network like CloudFront. For help with that, check back here or in the #sailsjs channel on freenode (http://webchat.freenode.net/)
@mikermcneil Session stuff is done. Do you have a syntax for redis socket.io config? |
Something like:
sockets: {
  adapter: 'memory'
}
@techpines that makes sense to me. For this and the session store, we'll just want to make sure to generate the default config in new projects with a note about the possible options (since we're not actually using the adapters yet). Thanks!
Hey I just pushed up the redis changes. I had it working on a trivial example with sessions and pub/sub working simultaneously. |
Awesome, can't wait to take a look. |
Pushing this off to "Someday" since the n-single-process-instances-behind-a-load-balancer approach works as a scalability approach for the immediate term, especially coupled with a Node.js-oriented PaaS provider like Nodejitsu or Modulus. |
Full Redis session and MQ support is in as of 2226824 and will be released as part of 0.9. If you can't wait, it's in the master branch! |
See http://rowanmanning.com/posts/node-cluster-and-express/ for an example of how to get an express app running with cluster (it's actually quite easy). I wanted to migrate a project from express to sails, but I would hate to sacrifice multi-core performance. Since getting express working with cluster is very straightforward, and since sailsjs is based on express, would it be a leap to say it's easy to get sailsjs to work with cluster? I am asking because I am completely new to sails (i.e. I've only started looking into it today).
Thought of sharing what I do to use multiple cores with my Sails app. I simply run my app using pm2 with the switch |
I have been looking for some way to use sails with clustering but I couldn't find it and @alghamdi 's way using |
Agreed, pm2 seems to be the solution for now.
Any intention to use cluster/master when instantiating Express? I played around with pm2, but eventually it was unable to manage the processes and produced errors; in the end the whole pm2 utility stopped working, and I don't know what went wrong. I think sails could simply take a config option to set the number of workers, and maybe the session store as discussed before. It would be much easier than setting up nginx, multiple startup scripts for each sails instance, etc., which makes the deployment of such systems a lot more complicated.
@seti123 |
Are you not using "sails lift"? When I use node app.js with sails I get "app is not defined". I know (we already have larger configs running 5 Node apps for one system, with automatic setup scripts of ~200 lines...), and I try to keep it small; in all our other Node apps we use cluster, but in sails it doesn't work like it does in our other apps. I just want to remark that pm2 made chaos on my server (spawning many processes as if in a loop, and finally an inconsistent state, with errors that it couldn't find the process IDs' state; only a reboot helped). But this issue shouldn't be discussed here.
@seti123 sails.js does have ~Oh.. tell me about deploying multiple node.js's.. |
@alghamdi @mikermcneil @pmalek @mikedevita I am trying to run sails with pm2 ( |
@Globegitter I've never run into this before, but perhaps it's just a permission issue. Try removing
Maybe you could try this code by hacking app.js, and then start your site with the command "forever start app.js":
My app is running fine with |
@juanpasolano maybe storing sessions in Redis or MongoDB could help. |
@alghamdi Thanks, I actually did this but with mongo, which I didn't know was so simple with sails. |
Is it possible to use Node.js clusters in sailsjs?
@rachaelshaw we should have sailsbot reopen closed issues when folks comment (either that, or lock them, as has become vogue in other projects recently. I like the reopen approach though.) @tarunjadhwani this is possible! (And recommended.) My experience has been that PaaS solutions like Heroku do this automatically in most cases, but if you're doing something more custom or bare-metal, you'll want to set it up yourself. (Check out
@mikermcneil is there an example implementation of this? Or are you just suggesting changing

```javascript
// Start server
sails.lift(rc('sails'));
```

to e.g.:

```javascript
const cluster = require('cluster');
const os = require('os');
// as in the generated app.js:
const sails = require('sails');
const rc = require('rc');

if (cluster.isMaster) {
  cluster.on('exit', (worker, code, signal) => {
    // TODO: vary your logging and restart logic here depending on why the process failed
    sails.log.error(`${worker.process.pid} exited with code '${code}' for signal '${signal}'. Was suicide? ${worker.suicide}`);
    sails.log.info('Restarting worker...');
    cluster.fork();
  });

  const cpuCount = os.cpus().length;
  sails.log.info(`Starting ${cpuCount} workers...`);
  for (let i = 0; i < cpuCount; ++i) {
    sails.log.info('Starting worker...');
    cluster.fork();
  }
} else {
  sails.log.info('Worker started. Starting sails...');
  sails.lift(rc('sails'));
}
```
@alxndrsn @tarunjadhwani Here are some sails docs for setting up a multi-server / clustered setup through a hosted Redis instance. Also: preparing for a clustered environment and deploying your own cluster in sails. Hope this helps 👍 |
Use cluster or fleet to allow for multi-instance deployment