
regression? v5.1 doesn't seem to release memory back to the OS? #1398

Closed
oranagra opened this issue Dec 24, 2018 · 5 comments

Comments

@oranagra commented Dec 24, 2018

Hi,
I recently upgraded from v4.0.3 to v5.1 (using Redis).
From past experience I remember that when releasing memory, jemalloc kept about 12% (1/8) of the current usage around for future use, instead of giving everything back to the OS.
But now, with v5.1, it doesn't seem to return anything to the OS.
I assume I'm missing something major, but I can't find it in the release notes or in recent commits/issues.

For example, an empty process:

Allocated: 2112024, active: 2469888, metadata: 2909288 (n_thp 0), resident: 5656576, mapped: 11190272, retained: 4014080

Then, after making many allocations (about 500 MB in total) and releasing most of them:

Allocated: 1962816, active: 4120576, metadata: 20155656 (n_thp 0), resident: 599027712, mapped: 602001408, retained: 32387072

In v4.0.3, resident used to shrink back to about 14 MB.
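
(For reference, the numbers above come from jemalloc's stats mallctls. A minimal sketch of reading the same counters, assuming an unprefixed jemalloc build; if jemalloc was built with a symbol prefix, the calls carry that prefix:)

```c
#include <stdio.h>
#include <stdint.h>
#include <jemalloc/jemalloc.h>

/* Print the same counters as above: allocated/active/metadata/resident/mapped/retained. */
static void print_jemalloc_stats(void) {
    uint64_t epoch = 1;
    size_t sz = sizeof(epoch);
    /* The stats are cached; bumping the epoch refreshes them before reading. */
    mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

    const char *names[] = {
        "stats.allocated", "stats.active", "stats.metadata",
        "stats.resident", "stats.mapped", "stats.retained",
    };
    for (size_t i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
        size_t val = 0;
        size_t len = sizeof(val);
        if (mallctl(names[i], &val, &len, NULL, 0) == 0)
            printf("%s: %zu\n", names[i], val);
    }
}
```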

@interwq (Member) commented Dec 24, 2018

Hi,

This looks to be caused by the application going idle, combined with the time-based decay introduced in 5.0. With the new decay design, jemalloc returns memory to the OS gradually over time. We observed efficiency wins with this feature in most applications. However, one edge case is that if the application goes completely idle, no decay progress is made. The issue is amplified by time-based decay, because right after memory is deallocated the decay rate is deliberately slow (expecting reuse in the near future).

The background thread feature should solve this; try adding 'background_thread:true' to the malloc conf. See https://github.com/jemalloc/jemalloc/blob/dev/TUNING.md for more details.
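
A minimal sketch of wiring that in, assuming an unprefixed jemalloc 5.x build (with a prefixed build the variable and environment-variable names carry the prefix):

```c
#include <jemalloc/jemalloc.h>

/* Option 1: compile-time default, read when jemalloc initializes. */
const char *malloc_conf = "background_thread:true";

/* Option 2: no rebuild needed; set the environment variable before starting
 * the process, e.g.:
 *   MALLOC_CONF=background_thread:true ./your-server
 */
```

Either form asks jemalloc to spawn background threads that purge dirty pages on the decay schedule even while the application itself is idle.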

@interwq interwq added the question label Dec 24, 2018
@oranagra (Author) commented Dec 25, 2018

@antirez FYI, FLUSHALL doesn't immediately release memory to the OS.

@anysql commented Dec 25, 2018

We hit the same issue in an application with a fixed number of threads (a MySQL proxy), not just in an idle application.

@interwq (Member) commented Jan 18, 2019

Time-based decay is driven by active allocation activity, so when threads (or the underlying arena) go mostly idle, dirty memory can stay queued longer than expected. As mentioned, the background_thread:true option will solve this; it also improves tail latency in many cases.
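
For illustration, a hedged sketch of enabling the background threads at runtime via mallctl, plus an explicit purge of all arenas, which is another way to push dirty pages back to the OS from an otherwise idle process (assuming an unprefixed jemalloc 5.x build):

```c
#include <stdbool.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Enable the background purging threads at runtime (same effect as
 * background_thread:true in the conf string). */
static int enable_background_threads(void) {
    bool enable = true;
    return mallctl("background_thread", NULL, NULL, &enable, sizeof(enable));
}

/* Force an immediate purge of unused dirty pages in every arena. */
static int purge_all_arenas(void) {
    char name[64];
    snprintf(name, sizeof(name), "arena.%u.purge", (unsigned)MALLCTL_ARENAS_ALL);
    return mallctl(name, NULL, NULL, NULL, 0);
}
```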

@interwq interwq closed this Jan 18, 2019
@interwq (Member) commented Jan 18, 2019

As a side note: although in our experience time-based decay performs better in most cases, we did discuss combining it with the previous ratio-based decay to better handle cases like an application going idle. Time-based decay does require regular allocation activity to do its best; the background_thread feature was partly motivated by this problem.
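
For illustration only (not a recommendation made in this thread), the time-based decay windows themselves are also tunable in the conf string; shorter values return dirty/muzzy pages to the OS sooner at some CPU cost, 0 purges immediately, and -1 disables purging:

```c
/* Hypothetical conf string combining background threads with shorter decay
 * windows (values in milliseconds). */
const char *malloc_conf =
    "background_thread:true,dirty_decay_ms:1000,muzzy_decay_ms:1000";
```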
