
exclude replication backlog when computing used memory for key eviction #4668

Closed

Conversation

0xtonyxia
Contributor

Sometimes, to complete a full resync during a write-traffic burst, we need to increase the size of the replication backlog.

I have encountered a situation where the replication backlog was set to 4GB, which caused most of the keys in the database to be evicted. This was beyond my expectation, and probably beyond other Redis users' as well.

So I think excluding the replication backlog when computing used memory is a better idea; after all, the replication backlog is a kind of cache and shouldn't be considered part of the dataset.

@oranagra
Member

@0xtonyxia to the best of my understanding (although it might be just a guess), the reason we don't count slave buffers for eviction is not that they are not part of the actual dataset (there are many other things that are not part of the dataset and are not excluded).
The reason is that the slave buffers may grow during the eviction loop (each eviction adds DELs to the buffers), which can lead to an endless eviction loop, or at the very least to evicting more than was needed (since the buffers will soon be flushed and the memory reclaimed).
But the replication backlog is usually constant, present for the entire life of the process, and also usually rather small.

I think that as a user, if you suddenly increase the size of the backlog during the life of the process, you should also increase maxmemory (to prevent eviction).

@0xtonyxia
Contributor Author

Thanks @oranagra. I think your explanation makes sense. Currently only the AOF buffer and the slave output buffers are excluded, and both of them grow while keys are being deleted to free memory.

The comment on freeMemoryGetNotCountedMemory() says that eviction should be based mostly on the data size; that's why I opened this PR. But as you said, there are still many other things that are not excluded, so maybe it is a good idea to increase maxmemory whenever the backlog needs to be increased by a lot, though I'm not sure people will always remember to do that. @antirez, can you help me confirm this? Thank you.

@antirez
Contributor

antirez commented Jul 17, 2018

Hello @0xtonyxia, yep, this is exactly as @oranagra said. The problem is that normal Redis users do not know all these details, so when they set "maxmemory 4GB" they expect the server to use at most 4GB. We already fail users here, because fragmentation plus the things we cannot include in the memory usage will already inflate memory use beyond maxmemory, and users have trouble understanding why. The more we leave out of the maxmemory accounting, the worse it is from the POV of what users expect. So better to take things as they are :-) Cheers.

@0xtonyxia
Contributor Author

I see. Thanks for your reply. @antirez
