Fixing backward compatibility issue for PER_NODE capacity calculation algorithm #12195
Conversation
@Mak-Sym thanks for your contribution. Before we can review & merge, can you follow the instructions to sign and send the Hazelcast Contributor Agreement?
@mmedenjak I already did it.
Thanks, can you tell me when you sent it? I will try to find out why it got stuck.
Hi, I can confirm I received the agreement. I apologize for the delay.
Thank you!
+ "Given the current cluster size of %d members with %d partitions, max size should be at "
+ "least %d.", mapConfig.getName(), memberCount, partitionCount, minMaxSize));
double perNodeMaxRecordStoreSize = (1D * configuredMaxSize * memberCount / partitionCount);
if(perNodeMaxRecordStoreSize < 1) {
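To illustrate the calculation being reviewed, here is a minimal standalone sketch (not Hazelcast's actual API; the helper name is hypothetical) of how a PER_NODE max size is translated into a per-partition record-store limit, and why a configured size smaller than partitionCount / memberCount produces a value below 1:

```java
public class PerNodeCapacitySketch {

    // Hypothetical helper mirroring the expression in the diff above.
    // The 1D factor forces floating-point division, avoiding integer truncation.
    static double perNodeMaxRecordStoreSize(int configuredMaxSize,
                                            int memberCount,
                                            int partitionCount) {
        return 1D * configuredMaxSize * memberCount / partitionCount;
    }

    public static void main(String[] args) {
        // 271 partitions (Hazelcast's default), 1 member, max size 100:
        // 100 * 1 / 271 ≈ 0.369 — below 1, which is the case the
        // `if (perNodeMaxRecordStoreSize < 1)` branch guards against.
        double size = perNodeMaxRecordStoreSize(100, 1, 271);
        System.out.println(size < 1); // prints "true"
    }
}
```

Without special handling, rounding such a sub-1 per-partition limit down to zero would evict every entry, which is the backward-compatibility problem this PR addresses.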
Should compare against MIN_SANE_PER_PARTITION_SIZE here, rather than just 1, so that we never allow fewer records per node than the minimum.
This line will also fail CheckStyle (we require a space after `if`, i.e. `if (`).
Please run CheckStyle on your local machine: mvn clean verify -Pcheckstyle -DskipTests
@rjatkins I deliberately left the ability to set a cache size smaller than MIN_SANE_PER_PARTITION_SIZE, in case clients want to do that for whatever reason. I tried to fix only the case where the cache size doesn't make sense (in other words, where it is definitely wrong). At the same time, a partition size of 1 may make sense under some conditions. But I could also add a useRecommendedMinSettings flag, which would force the partition size to MIN_SANE_PER_PARTITION_SIZE (probably with another warn message). What do you think?
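The flag proposed above could look something like this minimal sketch. All names here (MIN_SANE_PER_PARTITION_SIZE, useRecommendedMinSettings) follow the discussion in this thread, not Hazelcast's actual implementation, and the threshold value is an assumption:

```java
public class MinSettingsSketch {

    // Assumed minimum; the real constant lives in Hazelcast's eviction code.
    static final double MIN_SANE_PER_PARTITION_SIZE = 1D;

    // Hypothetical opt-in clamp: only when the flag is set do we force the
    // computed per-partition size up to the recommended minimum.
    static double effectivePerPartitionSize(double computed,
                                            boolean useRecommendedMinSettings) {
        if (useRecommendedMinSettings && computed < MIN_SANE_PER_PARTITION_SIZE) {
            // A warning could be logged here before forcing the minimum.
            return MIN_SANE_PER_PARTITION_SIZE;
        }
        return computed; // otherwise honour whatever the client configured
    }

    public static void main(String[] args) {
        System.out.println(effectivePerPartitionSize(0.4, true));  // prints "1.0"
        System.out.println(effectivePerPartitionSize(0.4, false)); // prints "0.4"
    }
}
```

Keeping the clamp behind a flag preserves the PR's goal of not rejecting deliberately small configurations while still offering the safer default the reviewer asked about.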
@Donnerbart thx for the feedback - code formatting is fixed now.
@rjatkins can you rebase?
created a doc update issue for this PR: hazelcast/hazelcast-reference-manual#507
LGTM
5 new commits :( ah well...
Follow-up on the discussion in issue #11646. This simple fix restores backward compatibility in the data eviction algorithm for cache sizes smaller than the number of partitions. This is especially useful after the introduction of the dynamic map configuration feature.