This repository has been archived by the owner on Feb 20, 2021. It is now read-only.

Working Set memory keeps growing and goes on with a pagefile till no space is left #466

Open
ArjanVanBuijtene opened this issue May 26, 2016 · 8 comments

@ArjanVanBuijtene

We have a scenario where we constantly update many keys.
The behaviour is that memory grows quickly, easily above 100 GB of Redis data, although after using SCAN we have also seen usage drop from 125 GB to 50 GB.
From the moment Redis is started, its working set memory starts growing.
We have 192 GB of RAM, but Redis still grows on to take over 100 GB of page-file space as well.
We run the latest version 3 build.
We have tried setting maxmemory, changing from virtual machines to direct hardware, and looked into changing the malloc implementation and switching to Linux, but we believe none of these is the solution.
We feel that the Redis software has an issue with its memory handling.
We use maxmemory-policy volatile-ttl as the policy, since our keys have expires set.
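The settings described above amount to a redis.conf fragment roughly like the following (a sketch by way of illustration; the 140 GB maxmemory figure is the one quoted later in this thread, and is not confirmed as the exact value in use):

```conf
# Cap the dataset size (figure quoted later in this thread).
maxmemory 140gb

# When maxmemory is reached, evict keys with the shortest remaining TTL;
# only keys that have an expire set are eligible for eviction.
maxmemory-policy volatile-ttl
```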

Please let me know what further information is required to determine where the issue could be.

Brgds,
Arjan van Buijtene
Airtrade

@enricogior

Hi @ArjanVanBuijtene
I'm not sure I fully understand the problem. Redis allocates the memory it needs; if the dataset keeps growing, it will keep allocating memory. If you want to keep the dataset size below a certain threshold, you need to follow the general Redis memory configuration guidelines: http://redis.io/topics/faq
Changing the current memory allocator (jemalloc) is really a bad idea, since jemalloc has proven to be the best allocator for Redis in terms of low heap fragmentation. Switching to Linux won't really change the memory allocation behavior either, since Redis on Linux also uses jemalloc, and the code that allocates new memory is the same for both the Linux and the Windows versions. The only difference is that on Windows the allocated memory is backed by the system paging file in order to simulate the Linux fork() call.
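As an aside, the gap between dataset size and actually allocated memory can be observed in the `# Memory` section of the Redis `INFO` output (`used_memory` vs `used_memory_rss`, summarized as `mem_fragmentation_ratio`). A minimal sketch that parses such output; the sample values below are illustrative, not taken from this issue:

```python
# Parse the "# Memory" section of Redis INFO output and compute the
# fragmentation ratio (resident size / dataset size).
# The sample text is made up for illustration.
sample_info = """# Memory
used_memory:134217728000
used_memory_rss:184717728000
mem_fragmentation_ratio:1.38
"""

def parse_info(text):
    """Turn 'key:value' lines into a dict, skipping comments and blanks."""
    fields = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        fields[key] = value
    return fields

info = parse_info(sample_info)
used = int(info["used_memory"])
rss = int(info["used_memory_rss"])
print(f"dataset: {used / 2**30:.1f} GiB, resident: {rss / 2**30:.1f} GiB")
print(f"fragmentation ratio: {rss / used:.2f}")
```

A ratio well above 1 indicates heap fragmentation or allocator overhead rather than dataset growth.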

@ArjanVanBuijtene
Author

Hi Enrico,

The guidelines you mention do not work; we have tested them in a live environment.

Example:

Memory:

Redis-stats:

and the INFO:

And page-file activity, on top of the 172 GB (maxmemory is set to 140 GB) already taken from memory:

So something is causing the working set to keep on growing, regardless of the settings in use.

Redis is the only software (besides redis-stats) we run on this server.

Is there any more info you need?

Kind regards,

Arjan van Buijtene

@enricogior

@ArjanVanBuijtene
if you attached images, they didn't get through; you may want to try uploading the images directly on GitHub.

@ArjanVanBuijtene
Author

network
pagefile
redis-info
redis-stats

Herewith the pictures included. Sorry, habit of replying directly from mail ;)

@enricogior

@ArjanVanBuijtene
the working set is not the actual physical RAM in use; it's the virtual space mapped to the system paging file.

@ArjanVanBuijtene
Author

So the million-dollar question is why the virtual space goes up to 172 GB and the page file then starts growing to 50-100 GB, while Redis itself never reported a value higher than 130 GB and maxmemory is 140 GB.
Can you think of a scenario that could cause this behaviour?
Also, bringing the virtual space down again of course requires a restart.
A no-go for production.

@enricogior

@ArjanVanBuijtene
there are two scenarios in which the virtual address space grows and in turn causes Redis to reserve space in the system paging file:
1 - persistence/replication: both need to simulate the fork() API and both use the system paging file to implement the copy-on-write behavior; in the worst case this may require twice as much address space as the main Redis process uses.
2 - heap fragmentation: Redis maxmemory only limits the amount of space used by the internal Redis objects; it doesn't affect the virtual address space. If Redis needs to allocate memory, it asks jemalloc, which in turn calls the Windows memory manager, which will always try to honor the request. Heap fragmentation is inevitable; it varies with the use case, but it's always a factor, and there is no solution for it given that Redis is written in C and not in a garbage-collected language.

The default maximum size of the system paging file is 3.5 times the physical RAM, so your system should be able to grow it up to 672 GB. Unless Redis reaches a value close to that, you shouldn't worry about it, and there shouldn't be any reason to reboot the system to bring it down.
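The arithmetic above can be checked directly, combining the 3.5x default paging-file cap with the worst-case copy-on-write demand from point 1 (the figures are the ones stated in this thread):

```python
# Figures from the thread: 192 GB physical RAM, Redis address space
# observed around 172 GB, default paging-file cap of 3.5x RAM.
physical_ram_gb = 192
paging_file_cap_gb = 3.5 * physical_ram_gb  # default Windows maximum
redis_address_space_gb = 172

# Worst case during persistence/replication: copy-on-write can need
# up to twice the address space of the main Redis process.
worst_case_gb = 2 * redis_address_space_gb

print(f"paging file can grow to {paging_file_cap_gb:.0f} GB")
print(f"worst-case fork() demand: {worst_case_gb} GB")
print(f"headroom even then: {paging_file_cap_gb - worst_case_gb:.0f} GB")
```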

@ArjanVanBuijtene
Author

Thanks for your answers and the quick support. We are going to act on this and upgrade and optimize our environment where relevant.
