Imprecise PER_NODE capacity calculation algorithm #11646
During the HZ upgrade from
This unit test passes in
But those fixes look incorrect for the following reasons:
So proposed fix for
And deletion of a just-inserted object has to be avoided in
Does it make sense?
Another thing is that those changes are poorly documented. Setting the per-node capacity to be bigger than the number of nodes wasn't a requirement in 3.5. The documentation for 3.6 added it as a requirement, but without a clear explanation of what happens if max-size is set to a value lower than the partition count:
As the unit test above shows, it's possible to set max-size lower than the number of partitions if the eviction policy is set to
And one last thing: in the release docs for 3.6, the link to the corresponding issue is broken (it's under the "3.6-EA2 Fixes" section):
(it opens a PR for "Remove XML namespace injection during schema validation")
@Mak-Sym thanks a lot for the detailed issue report. Since Hazelcast version 3.8.3, a warning is logged if the per-node max-size value does not allow any entries to be inserted into the map, like this:
Logging was introduced with #10821. I see the reasoning behind your attempt to find the exact number of entries stored in the local member, but there are some important caveats:
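For context, the situation the warning guards against comes from the per-partition limit derived from the PER_NODE max-size. The sketch below is a plain-Java simplification based on the behaviour described in this thread, not a copy of the actual Hazelcast implementation:

```java
// Sketch of the PER_NODE per-partition limit arithmetic (an assumption
// based on this thread's description, not the Hazelcast source).
public class PerNodeLimitSketch {

    // Approximate per-partition entry limit for a PER_NODE max-size:
    // the per-node budget, scaled by member count, spread over all partitions.
    static int perPartitionLimit(int maxSizePerNode, int memberCount, int partitionCount) {
        return (int) ((long) maxSizePerNode * memberCount / partitionCount);
    }

    public static void main(String[] args) {
        // With 271 partitions (the default), one member and max-size 100,
        // integer truncation yields a limit of 0, so every inserted entry
        // is immediately evicted.
        System.out.println(perPartitionLimit(100, 1, 271));
        System.out.println(perPartitionLimit(5000, 1, 271));
    }
}
```

This also shows why a max-size below the partition count is the problematic case: the division truncates to zero.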
It is described in the reference manual and the XSD for declarative config that
Thank you for the explanation, @vbekiaris.
I agree that the current algorithm is faster, but on the other hand we only need to traverse the local partitions: is that operation really so expensive? It is certainly more expensive than performing a few arithmetic operations, but it doesn't require remote invocations or anything like that, so it should still be quite fast. Am I right, or did I miss something?
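The alternative being proposed here can be sketched as a simple sum over locally-owned partitions. The names and data structures below are hypothetical (Hazelcast's real record stores are internal classes); the point is only that the count touches local, in-memory state and needs no remote calls:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of "count the exact number of local entries":
// sum the sizes of only the partitions this member owns.
public class LocalSizeSketch {

    static int localEntryCount(Map<Integer, Integer> partitionSizes,
                               Set<Integer> ownedPartitionIds) {
        int total = 0;
        for (int partitionId : ownedPartitionIds) {
            // Each lookup is a local, in-memory read of that partition's size.
            total += partitionSizes.getOrDefault(partitionId, 0);
        }
        return total;
    }
}
```

The cost is linear in the number of owned partitions, which is the trade-off against the constant-time arithmetic estimate.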
Now, if we decide to stick with it, does it make sense to perform a put operation at all when the per-node capacity (according to the new algorithm) is < 1 and the eviction policy is not NONE? Instead of adding the data and then removing it immediately in the partition thread (which can be a remote operation), it would be better to avoid the put operation entirely, or to ban eviction of just-inserted data.
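The proposed guard might look something like the following; this is a hypothetical sketch of the idea in the paragraph above, not actual Hazelcast code:

```java
// Hypothetical guard: reject the put up front when the derived
// per-partition capacity is below 1 and eviction is enabled, rather
// than inserting the entry and evicting it immediately afterwards.
public class PutGuardSketch {

    enum EvictionPolicy { NONE, LRU, LFU, RANDOM }

    static boolean shouldRejectPut(int perPartitionLimit, EvictionPolicy policy) {
        // With a limit < 1, any inserted entry would be evicted right away,
        // so the put is futile unless eviction is disabled (NONE).
        return perPartitionLimit < 1 && policy != EvictionPolicy.NONE;
    }
}
```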
Also, the issue link in the release docs for 3.6 has to be fixed (please see my comment above).
You are right, no remote operations are involved. Still, the main concern is that accessing other partitions' record stores from another partition thread is incompatible with Hazelcast's threading model.
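The threading-model point can be illustrated roughly as follows. This is a simplification (Hazelcast's actual operation threading is more involved): each partition's state is only ever mutated from that partition's own thread, which is what keeps record stores safe without locks, and reading another partition's state from a different partition thread would break that invariant.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Simplified model of per-partition threading: one single-threaded
// executor per partition, and partition state touched only from its
// own executor. Not Hazelcast's actual implementation.
public class PartitionThreadingSketch {

    final int[] partitionEntryCounts;         // per-partition state, unsynchronized
    final ExecutorService[] partitionThreads; // one dedicated thread per partition

    PartitionThreadingSketch(int partitionCount) {
        partitionEntryCounts = new int[partitionCount];
        partitionThreads = new ExecutorService[partitionCount];
        for (int i = 0; i < partitionCount; i++) {
            partitionThreads[i] = Executors.newSingleThreadExecutor();
        }
    }

    // Safe: mutates partition state only on that partition's own thread.
    void putOnPartition(int partitionId) {
        partitionThreads[partitionId].submit(() -> partitionEntryCounts[partitionId]++);
    }

    // Drain all partition threads so pending operations complete.
    void awaitAll() {
        for (ExecutorService executor : partitionThreads) {
            executor.shutdown();
            try {
                executor.awaitTermination(5, TimeUnit.SECONDS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

Under this model, an eviction check that reads other partitions' counts from the current partition thread would be a cross-thread access to unsynchronized state, which is the incompatibility being described.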
/cc @Serdaro can this be fixed?
Thank you @Serdaro !