
IMap entry eviction issue #9095

Closed
pramodterdale opened this issue Oct 12, 2016 · 8 comments

@pramodterdale

commented Oct 12, 2016

I am using IMap to store a huge amount of data, and each key has a specified time-to-live, but some of the keys are not getting evicted at all, even after the time-to-live has passed. These keys stay forever: when I check the map size programmatically it shows there are entries in the map, and the same is shown in mancenter, but when I try to get the list of keys or values, nothing comes back. One more observation: I have backup enabled with count 1, but for these entries there is no backup either. I have the properties below in hazelcast.xml

<partition-group enabled="true"/>
<map name="default">
  <time-to-live-seconds>0</time-to-live-seconds>
  <eviction-policy>NONE</eviction-policy>
</map>

Code to add messages into the map

     msgMap.put(msgId, value, 30, TimeUnit.SECONDS);

I have registered an EntryListener for eviction and do the final processing on evicted entries. Most of the entries are getting evicted as expected, but some entries are never evicted: the count of finally processed entries is less than the total input entries, and the missing count exactly matches the map entry size at the end. What could be wrong? Why are not all the messages getting evicted? Am I missing something? Attaching the mancenter screenshot

    IMap<Long, String> msgMap = getMap("1070810_RM_map");
    System.out.println("MapSize:" + msgMap.size() + " :msgMap:" + msgMap + " :entrySet:" + msgMap.entrySet());

The output is

    MapSize:433 :msgMap:1070810_EM_map :entrySet:[]

[mancenter screenshot]

@ahmetmircik

Member

commented Oct 12, 2016

Hi @pramodterdale, which version are you using? Can you test it with the latest version, 3.7.2?

@jerrinot jerrinot modified the milestones: Backlog, 3.8 Oct 13, 2016

@pramodterdale

Author

commented Oct 19, 2016

I am able to reproduce the issue on version 3.7 as well as 3.7.2. The screenshot below is from 3.7.2; the last one I provided was from 3.7.

[mancenter screenshot]

@ahmetmircik ahmetmircik self-assigned this Oct 20, 2016

@ahmetmircik

Member

commented Oct 20, 2016

Hi @pramodterdale,

First of all, let me clarify some terms. In Hazelcast parlance, expiration and eviction are two different things. Eviction frees memory according to size-based policies (map size, memory size, etc.). Expiration determines the life-span of an entry. If your goal is freeing memory, you should configure eviction.

In your case, no eviction is defined, since the policy is NONE (<eviction-policy>NONE</eviction-policy>), but you are using expiration. Hazelcast Map removes expired entries gradually with a background sweeper task. Between sweep cycles, map size can still include expired-but-not-yet-removed entries, even though you cannot reach them, e.g. via IMap#get; eventually all expired entries are removed from the map. To accelerate this expiration process, please see my comments here and see this for a detailed explanation of those system properties. (Those system properties are available in version 3.7.2, not in 3.6.)
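For reference, a hazelcast.xml fragment that makes the sweeper more aggressive might look like the following. The property names reflect my understanding of the Hazelcast 3.7.x internals, and the values shown are illustrative assumptions rather than recommended settings — please verify both against the documentation for your exact version:

```xml
<hazelcast>
  <properties>
    <!-- How often the background expiration task runs (assumed default: 5 seconds) -->
    <property name="hazelcast.internal.map.expiration.task.period.seconds">1</property>
    <!-- Percentage of owned partitions scanned per run (assumed default: 10) -->
    <property name="hazelcast.internal.map.expiration.cleanup.percentage">100</property>
    <!-- Max cleanup operations dispatched per run (assumed default: 3) -->
    <property name="hazelcast.internal.map.expiration.cleanup.operation.count">10</property>
  </properties>
</hazelcast>
```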

Btw, I did a test with your given configuration and all entries were eventually swept. There seems to be no problem.
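To make the size-vs-get discrepancy concrete, here is a self-contained toy sketch of lazy expiration — my own illustration, not Hazelcast's actual implementation (LazyTtlMap and every name in it are hypothetical). Reads filter out expired entries immediately, while size() keeps counting them until a sweep physically removes them:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// Toy model of lazy expiration (hypothetical; NOT Hazelcast's implementation).
class LazyTtlMap<K, V> {
    private static final class Holder<V> {
        final V value;
        final long expiresAtMillis;
        Holder(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
        boolean expired(long nowMillis) {
            return nowMillis >= expiresAtMillis;
        }
    }

    private final Map<K, Holder<V>> store = new ConcurrentHashMap<>();

    void put(K key, V value, long ttl, TimeUnit unit) {
        store.put(key, new Holder<>(value, System.currentTimeMillis() + unit.toMillis(ttl)));
    }

    // Reads check the deadline, so an expired entry is never returned...
    V get(K key) {
        Holder<V> h = store.get(key);
        return (h == null || h.expired(System.currentTimeMillis())) ? null : h.value;
    }

    // ...but size() reports the raw backing-store size, including entries
    // that are expired yet not physically removed.
    int size() {
        return store.size();
    }

    // Stand-in for the background sweeper task: physically remove expired holders.
    void sweep() {
        long now = System.currentTimeMillis();
        store.values().removeIf(h -> h.expired(now));
    }
}
```

With an already-expired entry, get() returns null, yet size() still reports it until sweep() runs — the same window described in this issue.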

@pramodterdale

Author

commented Oct 20, 2016

It does not happen every time, and unfortunately I did not see any pattern either. The data volume is high, about 5k/sec of inserts, updates and deletes in any combination, and the process had run for a couple of hours when I saw this behavior.

I will try the options you specified in the comments above and see if the same behavior occurs.

Thanks

@ahmetmircik ahmetmircik modified the milestones: 3.7.3, 3.8 Oct 25, 2016

@ahmetmircik

Member

commented Oct 25, 2016

Any update?

@pramodterdale

Author

commented Oct 27, 2016

The settings mentioned above are working fine, but there is still a delay. One more thing I noticed: the sweeper task/thread is slow in processing. Is there any way to increase the number of sweeper tasks/threads, and if yes, what property can I use?

@jerrinot jerrinot modified the milestones: 3.7.4, 3.7.3 Nov 1, 2016

@ahmetmircik

Member

commented Nov 23, 2016

> one more thing i noticed is the sweeper task/thread is slow in processing

Why do you think it is slow?

@jerrinot jerrinot modified the milestones: 3.7.5, 3.7.4 Nov 29, 2016

@degerhz degerhz modified the milestones: 3.7.5, 3.7.6 Jan 13, 2017

@jerrinot

Contributor

commented Feb 10, 2017

Hi,

as explained by Ahmet, expired entries become inaccessible once the expiration interval passes; however, they might be physically removed later. This is working as designed.

@jerrinot jerrinot closed this Feb 10, 2017
