Full replication of Maps #360
Take a look at a beta replicated-map implementation over Hazelcast (using topics):
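To illustrate the topic-based approach mentioned above, here is a minimal sketch: each node keeps a local map and publishes put events that every node applies. A plain in-memory node list stands in for Hazelcast's `ITopic` so the sketch runs standalone; all names here are illustrative, not Hazelcast's API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of topic-style replication: a write is "published" and applied
// to the local copy of every node, so each node holds the full data set.
final class TopicReplicatedMap {
    private static final List<TopicReplicatedMap> cluster = new ArrayList<>();
    private final Map<String, String> local = new ConcurrentHashMap<>();

    TopicReplicatedMap() {
        cluster.add(this); // joining the "cluster" subscribes this node
    }

    // Publish a put event: every subscribed node applies it locally.
    void put(String key, String value) {
        for (TopicReplicatedMap node : cluster) {
            node.local.put(key, value);
        }
    }

    // Reads are always local -- the point of full replication.
    String get(String key) {
        return local.get(key);
    }

    public static void main(String[] args) {
        TopicReplicatedMap a = new TopicReplicatedMap();
        TopicReplicatedMap b = new TopicReplicatedMap();
        a.put("ssn:123", "alice");
        System.out.println(b.get("ssn:123")); // prints "alice"
    }
}
```

A real implementation would also need remove/clear events and conflict resolution for concurrent writes, which is exactly where the vector-clock discussion later in this thread comes in.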
Looks good. Would be nice to provide event callbacks to signal when replication is complete.
I was having a play with https://github.com/hazelcast/hazelcast-replicated-map/ and I noticed that it isn't fully implemented, as it is missing these features:
Are there plans to either complete this implementation or integrate a fully implemented version of a ReplicatedMap into Hazelcast core?
Any update on this in the past year? This is my primary use case. Hazelcast looks very tasty, but until replication is present and rock-solid, I'm stuck with Ehcache...
ReplicatedMap is a new feature of the upcoming Hazelcast 3.2, so I'm going to close this issue :) Didn't even know there was such an old one :) Only `clear` is not supported, since it is not fully asynchronously implementable: it always needs locks, because the implementation does not support ordered writes but instead resolves conflicts using a vector clock algorithm.
@noctarius was this issue ever implemented in 3.2, as you said?
It was removed from the release because we had randomly failing tests and it didn't meet my own quality standards; it was also missing features like ordered writes. We are trying to come up with a new / better implementation for 3.3, but we're not yet sure about the release. Sorry to say that.
Sorry to hear that. I was waiting for replicated map implementation too. |
It is definitely on the roadmap and will come, but it isn't practical to offer an unreliable implementation.
Sure. Hope you can add this feature in 3.3. |
The existing leader failure detection algorithm and the subsequent `PreVoteTask`/`VoteTask` work in the following way. `LeaderFailureDetectionTask` is a periodic task, running every 2-3 sec (_leader-election-timeout-in-millis + random_) on all followers:

- the task checks that the leader is reachable via `raftIntegration.isReachable(leader)`
- the task checks that `lastAppendEntriesTimestamp + maxMissedLeaderHeartbeatCount * heartbeatPeriodInMillis < currentTimeMillis`

If the check fails, it runs `PreVoteTask`. `PreVoteTask` has protection against disruptive followers, the so-called _leader stickiness_ check: a follower should only grant a vote to the candidate if it also agrees that the leader is unavailable. The current implementation of the check looks like this:

`if (raftNode.lastAppendEntriesTimestamp() > Clock.currentTimeMillis() - raftNode.getLeaderElectionTimeoutInMillis()) {...}`

This check is not valid, because `getLeaderElectionTimeoutInMillis()` returns a random number between 2 and 3 seconds, but `lastAppendEntriesTimestamp` (without additional activity) is updated only every 5 seconds. As a result, a follower can grant a vote to the candidate even though the leader is alive. To fix this, the leader stickiness check should use the same checks as `LeaderFailureDetectionTask`:

- check that the leader is reachable via `raftIntegration.isReachable(leader)`
- check that `lastAppendEntriesTimestamp + maxMissedLeaderHeartbeatCount * heartbeatPeriodInMillis < currentTimeMillis`

(cherry picked from commit aa73ff7ed601f9b80b27c08a6cab8e4748fd2ea0)
Backport of: https://github.com/hazelcast/hazelcast-mono/pull/326
GitOrigin-RevId: 583fa4116450e19dbef67710a715c8fab8081544
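The corrected stickiness condition described above can be sketched as a pure function over timestamps. This is an illustrative standalone sketch, not Hazelcast's actual code; the parameter names mirror the identifiers in the description, and the clock values are passed in explicitly so the logic is testable.

```java
// Sketch of the fixed leader-stickiness check: the leader counts as alive
// only if it is reachable AND its last heartbeat falls within the
// maxMissedLeaderHeartbeatCount * heartbeatPeriodInMillis window --
// the same condition LeaderFailureDetectionTask uses.
final class LeaderStickiness {
    static boolean leaderStillAlive(boolean leaderReachable,
                                    long lastAppendEntriesTimestamp,
                                    int maxMissedLeaderHeartbeatCount,
                                    long heartbeatPeriodInMillis,
                                    long nowMillis) {
        return leaderReachable
                && lastAppendEntriesTimestamp
                   + maxMissedLeaderHeartbeatCount * heartbeatPeriodInMillis
                   >= nowMillis;
    }

    public static void main(String[] args) {
        long now = 10_000;
        // Last heartbeat 5s ago, allowed window 5 * 1s = 5s: still alive.
        System.out.println(leaderStillAlive(true, 5_000, 5, 1_000, now)); // prints "true"
        // Last heartbeat 6s ago: outside the window, leader considered failed.
        System.out.println(leaderStillAlive(true, 4_000, 5, 1_000, now)); // prints "false"
    }
}
```

With this shape, a follower will refuse to grant a pre-vote while heartbeats are still arriving on schedule, regardless of the randomized election timeout.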
Hazelcast should allow Maps (and other data structures) to be fully replicated across the cluster. This is similar to a replicated region in GemFire, see http://community.gemstone.com/display/gemfire60/Replication
Our use case requires data structures such as SSN-register to be complete on all nodes for processing and queries. A near cache only solves half the problem as it is not pre-loaded. Using read-backup-data only solves the problem for the maximum allowed backup count (4 sync + 4 async?).
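For reference, the partial workaround mentioned above corresponds roughly to map settings like these in `hazelcast.xml` (map name illustrative); even maxed out, it only covers backup-count + 1 copies rather than every node:

```xml
<map name="ssn-register">
    <!-- Synchronous and asynchronous backup copies (capped, so this
         does not give full replication on large clusters). -->
    <backup-count>4</backup-count>
    <async-backup-count>4</async-backup-count>
    <!-- Allow reads to be served from local backup copies. -->
    <read-backup-data>true</read-backup-data>
</map>
```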