
QueueStore memory-limit seems ignored #11682

Closed
SzabKel opened this issue Oct 28, 2017 · 3 comments

Comments


@SzabKel SzabKel commented Oct 28, 2017

I made a test project to try out how QueueStore&lt;T&gt; and IQueue&lt;T&gt; work together, but even after spending some time with it, I cannot get it to behave as I expected. I uploaded the project here for reference.

Expected

I point the Config at the QueueStore, set the queue size to unlimited, and set the queue store's memory-limit option to some number N. Every time I add an object to my queue, Hazelcast should check whether the queue currently contains more entries than the specified limit. If it does, it should only add the object to the QueueStore; if it does not, it should also add it to the queue. Later, when entries leave the queue and its size drops below the preset limit, it should reload entries from the QueueStore.
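For reference, the setup described above looks roughly like this with Hazelcast's Java configuration API (the queue name and store class name are placeholders from my test setup, not the actual reproducer project):

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.QueueConfig;
import com.hazelcast.config.QueueStoreConfig;

Config config = new Config();

QueueConfig queueConfig = config.getQueueConfig("test-queue");
queueConfig.setMaxSize(0); // 0 = unlimited queue size

QueueStoreConfig storeConfig = new QueueStoreConfig()
        .setEnabled(true)
        .setClassName("com.example.MyQueueStore") // placeholder store class
        .setProperty("memory-limit", "5");        // keep at most N items in memory
queueConfig.setQueueStoreConfig(storeConfig);
```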

Experienced

The memory-limit setting is ignored and entries are inserted into both the store and the queue, even if the limit is reached.

I will happily provide more information if needed.

@mmedenjak mmedenjak added this to the 3.10 milestone Oct 28, 2017
@jerrinot jerrinot self-assigned this Oct 31, 2017

@jerrinot jerrinot commented Oct 31, 2017

hi @SzabKel,

thanks for reporting this. Having a project with a reproducer is always a massive help!

I think the instances of SomeDataObj you are observing are actually retained by your QueueStore implementation, not by Hazelcast.

The queue holds an entry for each item stored, but those entries do not hold a reference to your object. I believe the entries are kept to preserve ordering, and they are fixed in size regardless of your object's size. See this screenshot:
[screenshot: heap analysis of the queue entries]

You can see 5 entries (= your configured memory limit) retaining 264 bytes each, because they also retain your domain object (SomeDataObj), whilst the remaining 95 entries take just 40 bytes each: they are basically just an envelope, without the object itself.
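To illustrate where the retained objects actually live, here is a minimal in-memory store of the kind a reproducer project typically uses (hypothetical: it mirrors the QueueStore&lt;T&gt; method names but skips the Hazelcast interface so the snippet stays dependency-free). Because every stored value sits in the backing map, a heap dump will show all of them retained by the store itself:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory store. Every stored value is retained by this
// map -- so the heap dump shows the objects held by the store, not by
// the queue's fixed-size entries.
class InMemoryQueueStore {
    private final Map<Long, String> backing = new ConcurrentHashMap<>();

    void store(Long id, String value) { backing.put(id, value); }
    String load(Long id)              { return backing.get(id); }
    void delete(Long id)              { backing.remove(id); }
    int size()                        { return backing.size(); }
}
```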

I am closing the ticket for now; feel free to re-open it if you need further clarification.

@jerrinot jerrinot closed this Oct 31, 2017

@SzabKel SzabKel commented Oct 31, 2017

Thanks @jerrinot. Does this mean all the headers will be stored in every node's queue (as the queue is not a partitioned structure like the Map)? So if I wanted to store a few million entries in my queue (with a much lower memory-limit), could I still potentially overflow memory on some of my nodes?
Also, if no data is stored in the queue, how could I see the actual text values for objects where id > 5? (See my screenshot.)


@jerrinot jerrinot commented Oct 31, 2017

The headers will be stored on 1+x nodes, where x is your backup count (= 1 by default).

You are right, the queue is not partitioned. Instead, the queue name is used to select the member owning the queue. All entries (envelopes) from that queue will be stored on the same member.

The envelope contains the item ID. The same ID is used when storing an item into the queue store. When you call queue.poll() and the item is just an empty envelope, Hazelcast will invoke load(id) to load the payload from the store.
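The mechanism above can be sketched in plain Java (this is a rough illustration of the envelope idea, not Hazelcast's actual internals: items past the memory limit keep only an ID in the queue, and poll() falls back to load(id)):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of the envelope mechanism. The queue always holds a fixed-size
// entry per item; the payload is kept in memory only while the queue is
// under the memory limit, otherwise it is fetched from the store on poll.
class EnvelopeQueue {
    private static final class Envelope {
        final long id;
        final String payload; // null when only the store holds the value
        Envelope(long id, String payload) { this.id = id; this.payload = payload; }
    }

    private final Deque<Envelope> queue = new ArrayDeque<>();
    private final Map<Long, String> store = new HashMap<>(); // stand-in for QueueStore
    private final int memoryLimit;
    private long nextId;

    EnvelopeQueue(int memoryLimit) { this.memoryLimit = memoryLimit; }

    void offer(String value) {
        long id = nextId++;
        store.put(id, value);                           // store(id, value): always persisted
        boolean keepInMemory = queue.size() < memoryLimit;
        queue.addLast(new Envelope(id, keepInMemory ? value : null));
    }

    String poll() {
        Envelope e = queue.pollFirst();
        if (e == null) return null;
        String value = (e.payload != null) ? e.payload : store.get(e.id); // load(id)
        store.remove(e.id);                             // delete(id) after a successful poll
        return value;
    }
}
```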
