Redis takes up *more* memory when deleting a lot of keys #435

Closed
ehudros opened this Issue · 5 comments

2 participants

@ehudros

We have a medium-sized Redis server taking up about 13 GB of RAM (holding ~15 million keys).
A lot of these keys belong to "guest" users who are no longer active and need to be actively deleted in order to free up memory for new ones.
Our process uses a large set holding about 3 million strings, which are in fact keys of guest user objects the system has created. Every couple of hours we run a script that iterates over these keys, fetches their data and deletes the key (and the pointer set entry) if certain conditions are met.
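Roughly speaking, the cleanup pass looks like this (a simplified redis-py sketch; the set name guest:keys, the connection details, and the is_stale() check are placeholders, not the real script):

```python
# Hypothetical sketch of the cleanup pass: walk the pointer set, fetch each
# guest object, and delete both the object and its set entry when it is stale.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

GUEST_SET = "guest:keys"  # placeholder name for the pointer set


def is_stale(data):
    # Placeholder for the "certain conditions" the real script checks.
    return data is None


for guest_key in r.smembers(GUEST_SET):
    data = r.get(guest_key)           # fetch the guest object
    if is_stale(data):
        r.delete(guest_key)           # delete the guest object itself
        r.srem(GUEST_SET, guest_key)  # and its entry in the pointer set
```

Note that SMEMBERS on a set of ~3 million members returns the whole set in a single reply, which is itself a sizable allocation on both the server and the client; SSCAN, available in later Redis versions, iterates the set incrementally instead.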
It all seemed to work fine until today, when a weird issue popped up. When the script runs, it starts deleting a lot of keys (as usual). However, instead of seeing Redis' memory footprint go down, we actually see it increase considerably: over a gigabyte of memory is taken for every few hundred thousand keys that are deleted. We are currently not able to complete a full run, as we have to kill the script to prevent Redis from hitting its memory limit.
When we take down the server and rebuild Redis from the AOF file, its memory footprint shrinks back to a more reasonable size. There have been no code changes to the script, and it only runs DEL and SREM commands (no writes at all).
Is there anything we can do to investigate this further? What could possibly cause such a behavior?

Thanks! :)

@antirez
Owner

Hi, unfortunately the report is not enough without information like the Redis version, the INFO output, and the configuration file. Thanks.

@ehudros

You're right, sorry for not providing that in my original post.
Here's the output of INFO:
https://gist.github.com/2311051

Our config file:
https://gist.github.com/2311080

Could this be related to running srem a lot on a very large set?

@antirez
Owner

SREM is surely not the problem... the first issue is that this Redis instance is too old :)
There are many known problems in this release. However, I think that the issue here is not a Redis bug.

So a few more questions:

  • Is the memory problem only triggered by the script execution?
  • Does the reported used memory return to its normal amount if you stop the script? (the used_memory field of INFO)
  • What is the output of CLIENT LIST while the script is running? (assuming you can run the script for a few seconds to capture it; a quick way to sample both is sketched below)
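
Something like the following (a rough redis-py sketch; connection details are placeholders) is enough to sample both numbers while the script runs:

```python
# Sample used_memory from INFO and the parsed CLIENT LIST output every few
# seconds while the cleanup script is running.
import time

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

for _ in range(12):  # roughly one minute of samples
    print("used_memory:", r.info()["used_memory"])
    for client in r.client_list():  # CLIENT LIST parsed into per-client dicts
        print("  %s cmd=%s omem=%s" % (client["addr"], client["cmd"], client["omem"]))
    time.sleep(5)
```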

Thanks.

@ehudros

Hmm... I can't reproduce the issue anymore; running the script doesn't seem to have the same effect.
Really weird stuff :)
If I manage to reproduce it I'll reopen this ticket; closing it for now.

@ehudros ehudros closed this
@antirez
Owner

@ehudros thank you. Sometimes these problems are related to clients not consuming the replies they get from the server (so the replies accumulate on the server side), or to very big MULTI/EXEC blocks that are never terminated.
You can understand many of these issues by checking the CLIENT LIST output while the problem happens.
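
As a rough example (redis-py, arbitrary threshold), both cases show up in the CLIENT LIST fields: a large omem means replies are piling up unconsumed, and a non-negative multi means the client has an open MULTI block:

```python
# Flag clients whose output buffer is large (replies not being consumed) or
# that are sitting inside an open MULTI block. The 10 MB threshold is arbitrary.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

for client in r.client_list():
    omem = int(client["omem"])    # memory used by this client's pending replies, in bytes
    multi = int(client["multi"])  # commands queued in an open MULTI; -1 when not in a MULTI
    if omem > 10 * 1024 * 1024 or multi >= 0:
        print(client["addr"], client["cmd"], "omem=%d multi=%d" % (omem, multi))
```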

Cheers,
Salvatore
