
Full replication of Maps #360

Closed
trond-arve-wasskog opened this issue Nov 23, 2012 · 11 comments

Comments

@trond-arve-wasskog

Hazelcast should allow Maps and (other data structures) to be fully replicated across the cluster. This is similar to a replicated region in GemFire, see http://community.gemstone.com/display/gemfire60/Replication

Our use case requires data structures such as SSN-register to be complete on all nodes for processing and queries. A near cache only solves half the problem as it is not pre-loaded. Using read-backup-data only solves the problem for the maximum allowed backup count (4 sync + 4 async?).
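For context, the `read-backup-data` workaround mentioned above corresponds to a map configuration along these lines (a sketch using Hazelcast's XML config element names; the exact maximum allowed backup counts vary by version, so the numbers here are illustrative):

```xml
<hazelcast>
  <map name="ssn-register">
    <!-- Keep synchronous and asynchronous backup copies on other members. -->
    <backup-count>3</backup-count>
    <async-backup-count>3</async-backup-count>
    <!-- Allow members to serve reads from their local backup copy. -->
    <read-backup-data>true</read-backup-data>
  </map>
</hazelcast>
```

Even with this, only members holding a primary or backup copy of a partition can read locally, which is why it "only solves the problem for the maximum allowed backup count".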

@mdogan
Contributor

mdogan commented Dec 15, 2012

Take a look at a beta replicated map implementation built on top of Hazelcast (using topics):

https://github.com/hazelcast/hazelcast-replicated-map/


@trond-arve-wasskog
Author

Looks good. Would be nice to provide event callbacks to signal when replication is complete.

@cameronbraid

I was having a play with

https://github.com/hazelcast/hazelcast-replicated-map/

and I noticed that it isn't fully implemented; it is missing these features:

  • keySet: doesn't reflect removed entries
  • clear: isn't replicated at all
  • size: doesn't reflect removed entries
  • etc.

Are there plans either to complete this implementation, or to integrate a fully implemented ReplicatedMap into Hazelcast core?

@dunkyboy

Any update on this in the past year? This is my primary use case. Hazelcast looks very tasty but until replication is present and rock-solid, I'm stuck with Ehcache...

@noctarius
Contributor

ReplicatedMap is a new feature of the upcoming Hazelcast 3.2, so I'm going to close this issue :) Didn't even know there was such an old one :)

Only clear is not supported, since it cannot be implemented fully asynchronously: it would always need locks, because the implementation does not provide ordered writes but instead resolves conflicts using a vector clock algorithm.
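The conflict-resolution idea mentioned here can be illustrated with a minimal, self-contained vector clock sketch (this is not Hazelcast's actual code; the class and member names are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// A vector clock lets each replica decide whether an incoming update
// happened-before its local state, supersedes it, or is concurrent with
// it (a genuine conflict that needs a resolution rule).
final class VectorClock {
    private final Map<String, Integer> counters = new HashMap<>();

    void increment(String memberId) {
        counters.merge(memberId, 1, Integer::sum);
    }

    /** True if every counter in this clock is <= the other clock's counter. */
    boolean happenedBefore(VectorClock other) {
        for (Map.Entry<String, Integer> e : counters.entrySet()) {
            if (e.getValue() > other.counters.getOrDefault(e.getKey(), 0)) {
                return false;
            }
        }
        return true;
    }

    /** Neither clock dominates: the updates are concurrent and conflict. */
    boolean concurrentWith(VectorClock other) {
        return !this.happenedBefore(other) && !other.happenedBefore(this);
    }
}

public class VectorClockDemo {
    public static void main(String[] args) {
        VectorClock a = new VectorClock();
        VectorClock b = new VectorClock();
        a.increment("member-1");                 // member-1 writes
        b.increment("member-1");                 // another replica saw that write...
        b.increment("member-2");                 // ...then member-2 wrote on top
        System.out.println(a.happenedBefore(b)); // true: b supersedes a
        a.increment("member-1");                 // a now carries an unseen write
        System.out.println(a.concurrentWith(b)); // true: conflict to resolve
    }
}
```

The point of the comment above is that such conflict resolution works per entry and asynchronously, whereas a replicated `clear` would need a cluster-wide ordering guarantee (i.e. locks) to be correct.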

@enesakar enesakar modified the milestones: 3.2+, 3.2 Mar 16, 2014
@docent

docent commented Mar 24, 2014

@noctarius was this issue ever implemented in 3.2, as you said?

@noctarius
Contributor

It was removed from the release because we had randomly failing tests and it didn't meet my own quality standards; it was also missing features like ordered writes. We are trying to come up with a new / better implementation for 3.3, but we are not yet sure about the release.

Sorry to say that.

@buremba
Contributor

buremba commented Mar 24, 2014

Sorry to hear that. I was waiting for replicated map implementation too.

@noctarius
Contributor

It is definitely on the roadmap and will come, but it isn't practical to offer an unreliable implementation.

@buremba
Contributor

buremba commented Mar 24, 2014

Sure. Hope you can add this feature in 3.3.

devOpsHazelcast pushed a commit that referenced this issue Jan 10, 2024
An existing leader failure detection algorithm and subsequent
`PreVoteTask`/`VoteTask` work in the following way.

`LeaderFailureDetectionTask` is a periodic task, running every 2-3 sec
(_leader-election-timeout-in-millis + random_) on all Followers:
- task checks that the leader is reachable via
`raftIntegration.isReachable(leader)`
- task checks that `lastAppendEntriesTimestamp +
maxMissedLeaderHeartbeatCount * heartbeatPeriodInMillis <
currentTimeMillis`
If these checks indicate that the leader has failed, the task runs `PreVoteTask`.

The `PreVoteTask` has protection from disruptive followers, the
so-called _leader stickiness_ check: a voting follower should only grant
a vote to the candidate if it also agrees that the leader is not
available.

The current implementation of the check looks like this: `if
(raftNode.lastAppendEntriesTimestamp() > Clock.currentTimeMillis() -
raftNode.getLeaderElectionTimeoutInMillis()) {...}`

This check is not valid, because `getLeaderElectionTimeoutInMillis()`
returns a random number from 2 to 3 seconds, but the
`lastAppendEntriesTimestamp` (without additional activity) is updated
every 5 seconds. As a result, the follower can give a vote for the
candidate even if the leader is alive.

To fix this, the leader stickiness check should use the same checks as
`LeaderFailureDetectionTask`:
- checks that the leader is reachable via
`raftIntegration.isReachable(leader)`
- checks that `lastAppendEntriesTimestamp +
maxMissedLeaderHeartbeatCount * heartbeatPeriodInMillis <
currentTimeMillis`

(cherry picked from commit aa73ff7ed601f9b80b27c08a6cab8e4748fd2ea0)

Backport of: https://github.com/hazelcast/hazelcast-mono/pull/326

GitOrigin-RevId: 583fa4116450e19dbef67710a715c8fab8081544
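The corrected stickiness condition described in the commit message can be sketched as follows (a standalone illustration; names like `heartbeatPeriodMillis` and `maxMissedHeartbeats` are assumptions for this sketch, not Hazelcast's actual fields):

```java
public class LeaderStickiness {
    /**
     * A follower should deny a pre-vote while it still considers the leader
     * alive: the leader is reachable AND has not yet missed the allowed
     * number of heartbeats. This mirrors LeaderFailureDetectionTask's checks
     * instead of comparing against the (random) leader election timeout.
     */
    static boolean leaderStillAlive(boolean leaderReachable,
                                    long lastAppendEntriesTimestamp,
                                    long heartbeatPeriodMillis,
                                    int maxMissedHeartbeats,
                                    long nowMillis) {
        long deadline = lastAppendEntriesTimestamp
                + maxMissedHeartbeats * heartbeatPeriodMillis;
        return leaderReachable && nowMillis < deadline;
    }

    public static void main(String[] args) {
        long now = 100_000;
        // Heartbeats every 5 s, 3 misses tolerated: a leader heard from
        // 12 s ago is still within its 15 s window.
        System.out.println(leaderStillAlive(true, now - 12_000, 5_000, 3, now)); // true
        // Heard from 16 s ago: the window has passed, grant the pre-vote.
        System.out.println(leaderStillAlive(true, now - 16_000, 5_000, 3, now)); // false
    }
}
```

This shows why the old check was too eager: with a heartbeat period of 5 s, comparing `lastAppendEntriesTimestamp` against a 2-3 s election timeout declares a healthy leader dead on almost every evaluation.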