Multi-datacenter, multi-rack, multi-node Dynomite config #569
Comments
Since everything is working, I would say you are good to go. One thing I am curious about is why you are running multiple Dynomite processes on each node. You could just run one.
Hi @ipapapa, I think the config is fine. I have figured out a lot since I asked this question. Now the most confusing part is actually the Dyno side; I describe it in Netflix/dyno#226. Thanks!
For example, on host A (216.11.111.1) you seem to have two Dynomite configurations: one that listens on 8102 and one that listens on 8104. For us, we have at least one host per rack. See an example: https://github.com/Netflix/dynomite/blob/dev/conf/redis_rack1_node.yml
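To make the "one Dynomite process per host" layout concrete, here is a minimal single-node, single-rack config sketch in the style of the linked example. This is an illustration, not the poster's actual file: the ports, token, seed entry, and local Redis address are placeholder assumptions.

```yaml
# Sketch of one Dynomite node's config (one process per host).
# All ports, tokens, and addresses below are illustrative placeholders.
dyn_o_mite:
  datacenter: dc1
  rack: rack1
  dyn_listen: 0.0.0.0:8101         # peer (Dynomite-to-Dynomite) port
  dyn_seed_provider: simple_provider
  dyn_seeds:                       # peers in the other racks/datacenters
  - 216.22.222.2:8101:rack2:dc1:0  # format: host:peer_port:rack:dc:tokens
  listen: 0.0.0.0:8102             # client-facing (Redis protocol) port
  servers:                         # the local Redis backend
  - 127.0.0.1:6379:1
  tokens: '0'
  data_store: 0                    # 0 = Redis
```

Each server then runs exactly one such process, with its own `dyn_listen`/`listen` pair and its peers enumerated under `dyn_seeds`.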
I understand your example. The example has only 1 node per rack: there are 3 racks, and each rack has 1 node.
Just so you know, I am not familiar with racks, nodes, etc. But now I think you named these after real physical "racks". So does "node" refer to a server within a "rack"?
Ok, so I changed my Dynomite cluster. I have
So in total, 2 nodes in the datacenter. I host each node on its own server. Below are the configs, and they are working.
redis_rack1_node_mari.yml
redis_rack2_node_mari.yml
I still want to know just one thing for clarity. I wrote it in my comment above, but in short:
You can do whatever works for you. As I mentioned above, the configuration you had was working, so it was OK. My comment was more about failure scenarios you may see later on. For example, if you have a server running two hosts and the server has a connectivity issue, then both hosts have a connectivity issue, and you end up with a network partition in more than one Dynomite process. That is why we run one host per server: if one server goes down, then only a portion of the token range has an issue (and we have 1-2 replicas to fail over to, so the issue does not bubble up to the client, and we can then warm up the node). However, this is not mandatory. It depends on how you manage your underlying infrastructure.
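The failover reasoning above can be sketched in a few lines. This is a rough illustration only, not Dynomite's real partitioner: the hash function, token convention, rack/node names, and `read` helper are all simplifying assumptions made for this example. The point it demonstrates is that with one process per server, a dead server takes out only one node's token slice in one rack, and the other rack still holds a replica.

```python
import zlib

MAX_TOKEN = 2**32

# Two racks, two nodes each; each node owns the range starting at its token.
# (Names and tokens are made up for illustration.)
racks = {
    "rack1": {"nodeA": 0, "nodeB": MAX_TOKEN // 2},
    "rack2": {"nodeC": 0, "nodeD": MAX_TOKEN // 2},
}

def owner(rack, key):
    """Node in `rack` whose token range contains the key's hash."""
    h = zlib.crc32(key.encode()) % MAX_TOKEN
    # The owner is the node with the largest token <= hash.
    return max(racks[rack].items(), key=lambda kv: kv[1] if kv[1] <= h else -1)[0]

def read(key, down=()):
    """Try each rack (each rack is a full replica), skipping downed nodes."""
    for rack in racks:
        node = owner(rack, key)
        if node not in down:
            return node
    raise RuntimeError("all replicas for this key are down")

key = "user:42"
print(read(key))                           # served by rack1's owner
print(read(key, down={"nodeA", "nodeB"}))  # rack1 unreachable -> rack2 replica
```

If `nodeA`'s server dies, only keys hashing into `nodeA`'s token slice need to fail over, and they are still served by `rack2`; nothing bubbles up to the client.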
I understand what you mean, @ipapapa, thank you so much for your patience. I have been an app developer, so I have not worked on backend data storage design. Apologies for my lack of knowledge.
Hello, I just started learning Dynomite last week. I need help checking whether my Dynomite configs are right.
I am not using Amazon AWS at all; everything is in-house. I want to set up a multi-datacenter, multi-rack, multi-node cluster in order to figure out the configuration. I am also writing client-side code using the Dyno client (Dyno-Jedis) so that I can make this work end-to-end: I write an API in the Dropwizard framework so I can set a value via the API, through Dyno, into Dynomite, and eventually Redis. That is what I mean by "end-to-end".
I set up a Dynomite cluster using 2 hosts:
host A: 216.11.111.1
host B: 216.22.222.2
**dc1: 1 rack. All nodes are hosted on host A: 216.11.111.1**
/***********************************/
**dc2: 2 racks. All nodes are hosted on host B: 216.22.222.2**
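In a layout like this, the cross-datacenter wiring lives in each node's `dyn_seeds` list: every node enumerates every peer in the other racks and datacenters. A hedged sketch for the dc1 node follows; the peer ports and tokens are invented placeholders, not values from this setup.

```yaml
# dyn_seeds fragment for the dc1/rack1 node on host A.
# Entry format is host:peer_port:rack:dc:tokens; ports/tokens are illustrative.
dyn_seeds:
- 216.22.222.2:8101:rack1:dc2:0   # host B, dc2 rack1 node
- 216.22.222.2:8103:rack2:dc2:0   # host B, dc2 rack2 node
```

Each of the dc2 nodes would carry the mirror-image list, pointing back at host A and at its sibling rack on host B.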
Let me know if this looks good or not. The Dynomite cluster seems to be working: I am hitting it from my Dyno client code, and it writes to and reads from Redis.
I have questions on the Dyno client side, but I will ask those in the Dyno GitHub repo.
I appreciate your help!