Astyanax client accessing cross datacenter on CL_LOCAL_QUORUM for write & read. #268
Comments
You need to tell the client which DC it is in. Try adding this to the `ConnectionPoolConfigurationImpl`: `.setLocalDatacenter("DC1")`. This will cause the client to filter out the nodes from the other DC.
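A minimal sketch of that suggestion (the class names come from the Astyanax library; the pool name, port, and seed hosts are placeholders for illustration):

```java
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;

public class LocalDcPoolConfig {
    // Sketch: pin the client to DC1 so it filters out hosts from other datacenters.
    static ConnectionPoolConfigurationImpl buildPoolConfig() {
        return new ConnectionPoolConfigurationImpl("MyConnectionPool") // placeholder pool name
                .setPort(9160)                                          // placeholder Thrift port
                .setMaxConnsPerHost(3)
                .setSeeds("dc1-node1:9160,dc1-node2:9160")              // placeholder seed hosts
                .setLocalDatacenter("DC1");  // must match the DC name used in the keyspace topology
    }
}
```

The string passed to `setLocalDatacenter` has to match the datacenter name as the snitch reports it (the same name used in the `NetworkTopologyStrategy` replication map), or the filter will exclude every host.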
Thanks for the quick response. If we set the DC, will the client fail if both nodes in that DC are down? Also, will it mostly connect to the local DC and only reach the other DC if the local DC's nodes are down?
Yes, if all nodes in that DC are down the client will fail. I'm curious why you chose such a configuration; it's not very fault tolerant. Our most basic deployment is a minimum of 3 nodes per DC with a replication factor of 3. That type of setup lets you lose up to 1 node without any impact on quorum. If you're OK with consistency level ONE, then you could have 2 nodes per DC.
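The fault-tolerance arithmetic behind this advice: a (local) quorum requires floor(rf/2) + 1 replicas to respond, so the number of replicas you can lose is rf minus that. A small worked example:

```java
public class QuorumMath {
    // Replicas required for a QUORUM/LOCAL_QUORUM read or write at replication factor rf.
    static int quorum(int rf) {
        return rf / 2 + 1;
    }

    // Replicas that may be down while quorum operations still succeed.
    static int tolerableFailures(int rf) {
        return rf - quorum(rf);
    }

    public static void main(String[] args) {
        // rf=1: quorum is 1, no failures tolerated.
        // rf=2: quorum is 2, still no failures tolerated.
        // rf=3: quorum is 2, one replica may be down.
        for (int rf = 1; rf <= 3; rf++) {
            System.out.println("rf=" + rf + " quorum=" + quorum(rf)
                    + " tolerable failures=" + tolerableFailures(rf));
        }
    }
}
```

This is why rf=2 with LOCAL_QUORUM buys no availability over rf=1: quorum of 2 means both replicas must be up. rf=3 is the smallest factor where LOCAL_QUORUM survives a node loss.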
Sorry for the late response; I had to move to some other high-priority work. Let me make sure I understand what you're saying. These are the configurations we are considering:

config: replication factor: 1, consistency level: LOCAL_QUORUM, nodes: 2 per DC
config: replication factor: 2, consistency level: LOCAL_QUORUM, nodes: 2 per DC
config: replication factor: 3, consistency level: LOCAL_QUORUM, nodes: 3 per DC

Correct me if my understanding is wrong, and sorry for going off topic here. Are you saying we need to go with LOCAL_QUORUM and a replication factor of 3?
-srrepaka
Looks like the issue has already been answered. Please reopen if you have any questions.
I have a multi-datacenter setup with 2 nodes in DC1 and 1 node in DC2, with the keyspace and Astyanax context configured as shown below. When the client is running from DC1, the request sometimes goes to DC2 (judging by the host returned from result.getHost()). I expected CL_LOCAL_QUORUM to always go to the local datacenter. Am I configuring something wrong? Is this something to do with RING_DESCRIBE?
CREATE KEYSPACE grd WITH replication = {
'class': 'NetworkTopologyStrategy',
'DC1': '1',
'DC2': '1'
};
Host where the request went to is captured using:
OperationResult<CqlResult<UUID, String>> result = pcqlQuery.execute();
logInfo.append("{ host= " + result.getHost());
thanks,
srrepaka