
Multi-datacenter, multi-rack, multi-node Dynomite config #569

Closed
marimcmurtrie opened this issue Jun 20, 2018 · 9 comments

marimcmurtrie commented Jun 20, 2018

Hello, I just started learning Dynomite last week. I need help checking whether my Dynomite configs are right.

I am not using Amazon AWS at all; everything is in-house. I want to set up a multi-datacenter, multi-rack, multi-node cluster in order to figure out the configuration. I am also writing client-side code using the Dyno client (Dyno-Jedis) so that I can make this work end-to-end: I write an API in the Dropwizard framework so I can set a value via the API, through Dyno, into Dynomite and eventually Redis. That is what I mean by "end-to-end" (a rough sketch of the client side is shown below).
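For reference, the client side looks roughly like the sketch below. This is a minimal, simplified sketch, not my exact code: the Host constructor and builder methods are the ones from the Dyno README and may differ between Dyno versions, the application/class names are placeholders, and I left out the token/topology supplier (something like .withCPConfig(...) with a token map supplier), which I believe a non-AWS setup needs; that is part of my Dyno-side questions.

import java.util.Collections;

import com.netflix.dyno.connectionpool.Host;
import com.netflix.dyno.connectionpool.HostSupplier;
import com.netflix.dyno.jedis.DynoJedisClient;

public class DynoClientSketch {
    public static void main(String[] args) {
        // One Host per Dynomite node, pointing at the client-facing "listen"
        // port (8102 in my configs) with its rack. The exact Host constructor
        // differs between Dyno versions (newer ones use a HostBuilder).
        Host node = new Host("216.11.111.1", 8102, "dc1_rack1", Host.Status.Up);

        // Static host supplier, since I am not running Eureka/AWS discovery.
        HostSupplier hostSupplier = () -> Collections.singletonList(node);

        DynoJedisClient client = new DynoJedisClient.Builder()
                .withApplicationName("my-dropwizard-api")   // placeholder name
                .withDynomiteClusterName("dyn_o_mite")
                .withHostSupplier(hostSupplier)
                .build();

        // My Dropwizard resource essentially just does this set/get.
        client.set("greeting", "hello via dyno");
        System.out.println(client.get("greeting"));

        client.stopClient();
    }
}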

I set up Dynomite cluster. I used 2 hosts.

host A: 216.11.111.1

  • CentOS 7
  • dynomite-v0.6.2-43-gdf884c0

host B: 216.22.222.2

  • CentOS 6.6
  • dynomite-v0.6.2-35-g68a58ab

**dc1: 1 rack. All nodes are hosted on host A: 216.11.111.1**

dyn_o_mite:
  datacenter: dc1
  rack: dc1_rack1
  dyn_listen: 0.0.0.0:8101
  dyn_seeds:
  - 127.0.0.1:8103:dc1_rack1:dc1:4294967294
  - 216.22.222.2:8101:dc2_rack1:dc2:1383429731
  - 216.22.222.2:8103:dc2_rack2:dc2:2147483647
  - 216.22.222.2:8107:dc2_rack2:dc2:4294967294
  listen: 0.0.0.0:8102
  servers:
  - 127.0.0.1:6379:1
  tokens: '2147483647'
  secure_server_option: datacenter
  pem_key_file: conf/dynomite.pem
  data_store: 0
  read_consistency : DC_SAFE_QUORUM
  write_consistency : DC_SAFE_QUORUM
  stats_listen: 0.0.0.0:22221

dyn_o_mite:
  datacenter: dc1
  rack: dc1_rack1
  dyn_listen: 0.0.0.0:8103
  dyn_seeds:
  - 127.0.0.1:8101:dc1_rack1:dc1:2147483647
  - 216.22.222.2:8101:dc2_rack1:dc2:1383429731
  - 216.22.222.2:8103:dc2_rack2:dc2:2147483647
  - 216.22.222.2:8107:dc2_rack2:dc2:4294967294
  listen: 0.0.0.0:8104
  servers:
  - 127.0.0.1:6379:1
  tokens: '4294967294'
  secure_server_option: datacenter
  pem_key_file: conf/dynomite.pem
  data_store: 0
  read_consistency : DC_SAFE_QUORUM
  write_consistency : DC_SAFE_QUORUM
  stats_listen: 0.0.0.0:22222

/***********************************/

**dc2: 2 racks. All nodes are hosted on host B: 216.22.222.2**

dyn_o_mite:
  datacenter: dc2
  rack: dc2_rack1
  dyn_listen: 0.0.0.0:8101
  dyn_seeds:
  - 127.0.0.1:8103:dc2_rack2:dc2:2147483647
  - 127.0.0.1:8107:dc2_rack2:dc2:4294967294
  - 216.11.111.1:8101:dc1_rack1:dc1:2147483647
  - 216.11.111.1:8103:dc1_rack1:dc1:4294967294
  listen: 0.0.0.0:8102
  servers:
  - 127.0.0.1:6378:1
  tokens: '1383429731'
  secure_server_option: datacenter
  pem_key_file: conf/dynomite.pem
  data_store: 0
  read_consistency : DC_SAFE_QUORUM
  write_consistency : DC_SAFE_QUORUM
  stats_listen: 0.0.0.0:22221

dyn_o_mite:
  datacenter: dc2
  rack: dc2_rack2
  dyn_listen: 0.0.0.0:8103
  dyn_seeds:
  - 127.0.0.1:8101:dc2_rack1:dc2:1383429731
  - 127.0.0.1:8107:dc2_rack2:dc2:4294967294
  - 216.11.111.1:8101:dc1_rack1:dc1:2147483647
  - 216.11.111.1:8103:dc1_rack1:dc1:4294967294
  listen: 0.0.0.0:8104
  servers:
  - 127.0.0.1:6377:1
  tokens: '2147483647'
  secure_server_option: datacenter
  pem_key_file: conf/dynomite.pem
  data_store: 0
  read_consistency : DC_SAFE_QUORUM
  write_consistency : DC_SAFE_QUORUM
  stats_listen: 0.0.0.0:22222

dyn_o_mite:
  datacenter: dc2
  rack: dc2_rack2
  dyn_listen: 0.0.0.0:8107
  dyn_seeds:
  - 127.0.0.1:8101:dc2_rack1:dc2:1383429731
  - 127.0.0.1:8103:dc2_rack2:dc2:2147483647
  - 216.11.111.1:8101:dc1_rack1:dc1:2147483647
  - 216.11.111.1:8103:dc1_rack1:dc1:4294967294
  listen: 0.0.0.0:8106
  servers:
  - 127.0.0.1:6377:1
  tokens: '4294967294'
  secure_server_option: datacenter
  pem_key_file: conf/dynomite.pem
  data_store: 0
  read_consistency : DC_SAFE_QUORUM
  write_consistency : DC_SAFE_QUORUM
  stats_listen: 0.0.0.0:22223

Let me know if this looks good or not. The Dynomite cluster seems to be working: I am hitting it from my Dyno client code, and it writes to/reads from Redis.

I have questions on the Dyno client side, but I will ask those in the Dyno GitHub repo.

I appreciate your help!


ipapapa commented Jun 25, 2018

Since everything is working, I would say you are good to go. One thing I am curious about is why you are running multiple Dynomite processes on each node. You could just run one.


marimcmurtrie commented Jun 25, 2018

Hi @ipapapa,
Thank you for your response. I am not running multiple processes on each node (or I am not following what you said). What made you think that? Is it because some nodes have the same token? I thought a token only has to be unique within a rack, doesn't it? Let me know so I can understand.

I think the config is fine. I have figured out a lot since I asked this question. Now the most confusing part is actually the Dyno side; I describe it in Netflix/dyno#226.

Thanks!


ipapapa commented Jul 7, 2018

For example, on host A (216.11.111.1) you seem to have two Dynomite configurations: one that listens on 8102 and one that listens on 8104. For us, we have at least one host per rack. See an example: https://github.com/Netflix/dynomite/blob/dev/conf/redis_rack1_node.yml

@marimcmurtrie

I understand your example. It has only 1 node per rack: there are 3 racks, and each rack has 1 node.
Mine here is 1 rack with 2 nodes, and those 2 nodes are hosted on the same server (host 216.11.111.1). That is why I have 2 configurations on the same IP. Is that wrong or unexpected? Maybe I am confused (sorry).

@marimcmurtrie

Just so you know, I am not familiar with racks, nodes, etc. But I now think you named these after real physical "racks", and that a "node" refers to a server within a "rack"?
If so, then I think you consider it common sense that each node should be hosted on its own server. Is that your assumption?
If so, then I lack that knowledge (sorry). I have only been reading the Dynomite configuration to explore possible cluster layouts, and for someone like me without data center experience this is the confusing part. I will write this up somewhere so that I (we) can improve it.
Anyway, please confirm the statement above.
Also, I am going to set up my Dynomite cluster so that each node is hosted on its own server. That way it is less confusing. Thank you!

@marimcmurtrie

OK, so I changed my Dynomite cluster. I now have:

  • 1 data center
  • 2 racks
  • Each rack has 1 node

So there are 2 nodes total in the datacenter, and I host each node on its own server. The configs are below, and they are working.

redis_rack1_node_mari.yml

dyn_o_mite:
  datacenter: dc
  rack: rack1
  dyn_listen: 0.0.0.0:8101
  dyn_seeds:
  - 216.11.111.1:8101:rack2:dc:1383429731
  listen: 0.0.0.0:8102
  servers:
  - 127.0.0.1:6379:1
  tokens: '1383429731'
  secure_server_option: datacenter
  pem_key_file: conf/dynomite.pem
  data_store: 0
  read_consistency : DC_SAFE_QUORUM
  write_consistency : DC_SAFE_QUORUM
  stats_listen: 0.0.0.0:22221

redis_rack2_node_mari.yml

dyn_o_mite:
  datacenter: dc
  rack: rack2
  dyn_listen: 0.0.0.0:8101
  dyn_seeds:
  - 216.22.222.2:8101:rack1:dc:1383429731
  listen: 0.0.0.0:8102
  servers:
  - 127.0.0.1:6377:1
  tokens: '1383429731'
  secure_server_option: datacenter
  pem_key_file: conf/dynomite.pem
  data_store: 0
  read_consistency : DC_SAFE_QUORUM
  write_consistency : DC_SAFE_QUORUM
  stats_listen: 0.0.0.0:22222

@marimcmurtrie

I still want to know just one thing for clarity. I wrote it in my comment above, but in short:
Should we host each node on its own server?
Thanks


ipapapa commented Jul 11, 2018

You can do whatever works for you. As I mentioned above, the configuration you had was working, so it was OK. My comment was more about failure scenarios that you may see later on. For example, if you have a "server" with two hosts and the server has a connectivity issue, then both hosts have a connectivity issue, and you end up with a network partition across more than one Dynomite process. That is why we run one host per server: if one server goes down, only a portion of the token range is affected (and we have 1-2 replicas to fail over to, so the issue does not bubble up to the client, and we can then warm up the node). However, this is not mandatory. It depends on how you manage your underlying infrastructure.

@marimcmurtrie

I understand what you mean, @ipapapa; thank you so much for your patience. I have been an app developer, so I have not worked on backend data storage design. Apologies for my lack of knowledge.
But yes, as you said, if I have multiple nodes hosted on 1 server and that server fails, then I lose multiple nodes. I know you have at least 2 racks per data center for high availability. That is not mandatory, but you do it to satisfy your needs.
I am currently playing with Dynomite, and I have only 2 Linux boxes to experiment with. With those 2 boxes I set up various Dynomite configurations (from a single-node cluster to this multi-datacenter, multi-rack case), which is why I came up with such a config and asked these questions. But now I think I've got it.
Thank you so much! I am closing this issue.
