
Eviction with USED_HEAP_SIZE may end up with an OOME on cluster members #13529

alparslanavci opened this issue Aug 3, 2018 · 1 comment



@alparslanavci alparslanavci commented Aug 3, 2018

The test:

  • Server defines a map with an eviction max size policy of “USED_HEAP_SIZE” with a limit far below the max heap size (200 MB in this case). Set in-memory format to BINARY as needed for this eviction size policy. Set eviction policy to LRU.
  • Client repeatedly “set”s objects into the map for different keys. These objects start small and increase in size. Max size is 10 MB, so far smaller than the eviction policy size limit.
  • Once eviction policy limit is reached, the map size appears to increase with each “set” even though some eviction occurs. Eventually this leads to an OOME.
Config config = new Config();

String mapName = "map";
config.getMapConfig(mapName)
        .setMaxSizeConfig(new MaxSizeConfig(200, MaxSizeConfig.MaxSizePolicy.USED_HEAP_SIZE)) // 200 MB eviction policy limit with "USED_HEAP_SIZE" mode.
        .setInMemoryFormat(InMemoryFormat.BINARY) // According to documentation this is required for USED_HEAP_SIZE.
        .setEvictionPolicy(EvictionPolicy.LRU);

HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance(config);

IMap<Integer, byte[]> map = hazelcastInstance.getMap(mapName);
int maxSizeBytes = 10 * 1024 * 1024; // 10 MB - each cached value stays far smaller than the eviction policy limit; the test fails before reaching this size anyway.
for (int i = 0; i < maxSizeBytes; i++) {
    map.set(i, new byte[i]);
    if (i % 1000 == 0) {
        System.out.println("Done " + i + " set operations");
    }
}
@gregrluck gregrluck added this to the 3.10.5 milestone Aug 17, 2018
@gregrluck gregrluck modified the milestones: 3.10.5, 3.11 Aug 17, 2018
@gregrluck gregrluck commented Aug 17, 2018

We are fixing this with an algorithm change that allows the number of entries evicted per operation to be configured. Up to now it has always been one. Changing it to 2 fixes this test case, and it can be raised even higher if needed.

Performance-wise, the partition thread has extremely low-latency access to the RecordStore, so measurable latency is hardly changed. Evicting two entries in one pass does the same total work as evicting one entry on each of two passes, so the overall time spent evicting remains constant.
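To illustrate why a batch size of 1 can fall behind steadily growing values while a batch size of 2 keeps up, here is a minimal, self-contained sketch of LRU batch eviction over an access-ordered LinkedHashMap. This is not the Hazelcast RecordStore implementation; the class name, `maxBytes`, and `evictionBatchSize` are illustrative assumptions.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a byte-size-limited LRU store that evicts a configurable
// batch of entries whenever the size limit is exceeded. Hypothetical
// names; not Hazelcast API.
class BatchEvictingStore {
    // Access order = true so iteration order is least-recently-used first.
    private final LinkedHashMap<Integer, byte[]> store =
            new LinkedHashMap<>(16, 0.75f, true);
    private long usedBytes = 0;
    private final long maxBytes;
    private final int evictionBatchSize;

    BatchEvictingStore(long maxBytes, int evictionBatchSize) {
        this.maxBytes = maxBytes;
        this.evictionBatchSize = evictionBatchSize;
    }

    void set(int key, byte[] value) {
        byte[] old = store.put(key, value);
        usedBytes += value.length - (old == null ? 0 : old.length);
        if (usedBytes > maxBytes) {
            evictBatch(); // one eviction pass per mutating operation
        }
    }

    // Removes up to evictionBatchSize least-recently-used entries.
    private void evictBatch() {
        Iterator<Map.Entry<Integer, byte[]>> it = store.entrySet().iterator();
        for (int n = 0; n < evictionBatchSize && it.hasNext(); n++) {
            usedBytes -= it.next().getValue().length;
            it.remove();
        }
    }

    long usedBytes() { return usedBytes; }
}
```

With growing values and a batch size of 1, each pass evicts one old (small) entry while adding one new (larger) entry, so heap usage climbs past the limit, which mirrors the failure in this issue. A batch size of 2 evicts faster than values grow and usage converges back toward the limit.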
