How do I configure multiple nodes on a single server? #444
As far as I know, ES maps nodes:servers::1:1, so you would need multiple servers (be they VMs, containers, or bare metal). That said, you can run a single-node “cluster” if you like.
@jeffbyrnes I have a cluster that someone built with Ansible that has three "nodes" (maybe instances is a better word).
It has three config files:
I would happily copy this config, but I'd like to use Chef and this Supermarket cookbook instead of Ansible to build my systems out. (We were already using Chef for 'everything' else when I inherited this.)
Happy to be corrected! I’d suspect your issue lies with the need to give each instance of an … (see lines 136–137 of …). I’d ask, though: why multiple nodes on a single machine? Unless you also plan to have each “node” read/write its data to a different disk, I’m unclear what you gain from this approach. We started with a 3-node cluster initially, and have slowly scaled it upwards as disk & memory needs have climbed.
Why have more than one node or instance on a single server? I have more than 30 GB of memory, and one instance of ES can only use about 30 GB of heap (the compressed-oops limit). I didn't spec the hardware or design the ES configuration; I was told how they should look in the end. We are using Ansible today, but I'd like to use Chef, since that is where we are heading with our configuration management.
Makes perfect sense, in that case! We designed around horizontal scaling, so my bias runs in the direction of more, smaller machines. I don't have any cycles to help change this behavior right now, so I can only encourage you to take a stab at it yourself in a fork, or see if the maintainers can help out.
Hi, just a note that running multiple Elasticsearch instances per node is indeed valid and quite common, usually in the situation @timw077 describes, where you have a powerful server with lots of RAM etc. (There are some caveats for this setup, e.g. trying to separate master and data nodes, ideally dedicating separate machines (virtual or physical) to masters, so they are not affected by any ripples in the data nodes.) I bet @martinb3 had this setup in mind when refactoring the cookbook; maybe he can shed more light here. In any case, it is something the cookbook should support.
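For illustration, the master/data split mentioned above comes down to a couple of settings in each instance's elasticsearch.yml. This is only a sketch using the node.master / node.data setting names from ES versions of this thread's era; check your version's docs before copying:

```yaml
# elasticsearch.yml for a dedicated-master instance (sketch)
node.master: true    # eligible to be elected master
node.data: false     # holds no shard data

# elasticsearch.yml for a data-only instance (sketch)
node.master: false
node.data: true
```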
Love learning new stuff. Thanks for the eye-opening.
@timw077 I haven’t sorted it out entirely myself, and have to step away to work on some other stuff, but I think you’ll need to specify a unique …
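Though the specifics were cut off above, the settings that typically must be unique per instance can be sketched like this (the cluster name, ports, and paths below are made-up examples, not values from this thread):

```yaml
# Instance 1 (hypothetical values)
cluster.name: mycluster       # shared by both instances
node.name: node-1             # unique per instance
http.port: 9200               # unique per instance
transport.tcp.port: 9300      # unique per instance
path.data: /data/disk1/es     # ideally a different disk per instance
path.logs: /var/log/elasticsearch/node-1

# Instance 2 (hypothetical values)
cluster.name: mycluster
node.name: node-2
http.port: 9201
transport.tcp.port: 9301
path.data: /data/disk2/es
path.logs: /var/log/elasticsearch/node-2
```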
@timw077 had emailed me about this too. It sounds like we should probably do a blog post and/or README.md entry about it.
Here is my test that continues to fail:
Hi folks, just an update: I've been working on this, and found the issue. It has to do with default values for resources being copied around after being modified. I'm working on a fix; then I'll file an issue describing the problem, as well as a PR to fix it, and a blog post on how to run multiple instances.
@timw077 can you try the latest release of the cookbook, v2.2.2?
The elasticsearch.yml has the values I expect.
The services start and run. Thanks!
Great! Thanks!
@timw077 I've also placed an example here, if it helps:
I was just fixing my recipe to plagiar... leverage that example.
👍
I don't know if this matters, and since I just learned how to spell elasticsearch a couple weeks ago, I'm not sure I'm in a place to defend one way or another. ;) The guy that set this up doesn't work here anymore, and we are redoing a lot of his work for various reasons.

@martinb3 The only thing that seems different between how things are working in production and the example is that in my kitchen converge, I'm getting (cluster_name => kitchen) instead of what I have in production: I think with a careful mv command, I can fix this while ES is stopped and be okay. All in all, we are all set. Thanks for fixing this.
@timw077 This should be something you can override by passing a data.dir to each instance's …
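As a sketch of what per-instance overrides might look like with this cookbook (the elasticsearch_configure resource and its configuration hash are my assumption about the v2.x API, and the names and paths are invented, so verify against the cookbook's README before using):

```ruby
# Hypothetical two-instance Chef recipe fragment; not a verified example.
elasticsearch_configure 'data-1' do
  configuration(
    'cluster.name' => 'mycluster',
    'node.name'    => 'data-1',          # unique per instance
    'path.data'    => '/data/disk1/es'   # separate data dir per instance
  )
end

elasticsearch_configure 'data-2' do
  configuration(
    'cluster.name' => 'mycluster',
    'node.name'    => 'data-2',
    'path.data'    => '/data/disk2/es'
  )
end
```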
[My production system was set up with Ansible. I'm trying to migrate to Chef, since we are using it to manage everything else on the systems, across the board.]
When I set data.dir, all the instances use the same data directory. Both instances are using the same instance id ("0"). The second instance fails to start with this error:
The files are writable...
@martinb3, that's great news, thanks!! |
Hi Tim, In your example before, your grep showed you were seeing the data.dir distinctly for each instance. Your more recent comment/example shows something different, but you took down the example gist you were using. If you want to put that gist back up, or show me the specific example you're using, I can keep trying to debug it; I'm not sure the provider we wrote has been tested with the data dir inside the home path (/usr/share/elastic...). We should probably open a separate issue for this item, though, since the initially reported problem was fixed and you had gotten multiple instances working okay. Cheers, Martin
I would have expected /etc/elasticsearch/elasticsearch-data-1.yml and /etc/elasticsearch/elasticsearch-data-2.yml to get created. Instead, /etc/elasticsearch/elasticsearch.yml just keeps getting clobbered back and forth.