
high mem_fragmentation_ratio #7741

Open
lw5885799 opened this issue Sep 2, 2020 · 6 comments

Comments

@lw5885799

version: 5.0.2
problem: high mem_fragmentation_ratio

info memory:
used_memory_human: 24.69M
used_memory_rss_human: 236.4M
mem_fragmentation_ratio: 9.59
mem_allocator: libc

I checked some articles; usually mem_fragmentation_ratio falls between 1 and 1.6. I have no idea why this happened.
How can I get a log, or is there some other way to find the cause?

@oranagra
Member

oranagra commented Sep 2, 2020

@lw5885799 We currently have limited visibility into the internals of libc malloc (we get a lot more detail when using jemalloc).
Maybe we can figure something out by looking at /proc/<pid>/smaps.
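
To make that concrete, here is a minimal sketch (assuming a Linux host with Python 3; pass the redis-server PID as an argument) that groups the smaps entries by mapping name and sums their Rss, so you can see whether most resident memory sits in [heap]/anonymous mappings (i.e. the allocator) or in some shared library or memory-mapped file:

```python
#!/usr/bin/env python3
"""Rough sketch: aggregate Rss per mapping from /proc/<pid>/smaps."""
import re
import sys
from collections import defaultdict

def rss_by_mapping(pid):
    rss = defaultdict(int)                                  # mapping name -> Rss in kB
    current = None
    header = re.compile(r'^[0-9a-f]+-[0-9a-f]+\s')          # e.g. "7f3b4c000000-7f3b4c021000 rw-p ..."
    with open(f'/proc/{pid}/smaps') as f:
        for line in f:
            if header.match(line):
                parts = line.split()
                # The pathname ([heap], a .so, a mapped file) is the 6th field, if present.
                current = parts[5] if len(parts) > 5 else '[anonymous]'
            elif line.startswith('Rss:'):
                rss[current] += int(line.split()[1])         # value is reported in kB
    return rss

if __name__ == '__main__':
    pid = sys.argv[1] if len(sys.argv) > 1 else 'self'       # default 'self' is just for testing
    totals = rss_by_mapping(pid)
    for name, kb in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:15]:
        print(f'{kb / 1024:10.1f} MB  {name}')
```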

@lw5885799
Author

@oranagra Is there any reference for analyzing /proc/<pid>/smaps? I know very little about this output. I mean, how do I find the cause of a high mem_fragmentation_ratio from the smaps data?

@oranagra
Member

oranagra commented Sep 7, 2020

Just see which mapping consumes the majority of RSS, and whether you can conclude that it's the libc allocator heap or not.

Maybe you'll be able to show that it's related to some other library that got loaded, or that it's a memory-mapped file.

If it is the allocator, then you need to find a way to look into its internals.
For jemalloc we have je_malloc_stats_print, which exposes a lot of data. I'm not sure what the equivalent is for libc malloc, but if you find one, feel free to make a PR for the MEMORY MALLOC-STATS command.
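
For context, here is a small sketch of pulling the fragmentation-related fields and the raw allocator report from a running instance (assuming redis-py and an instance reachable at localhost:6379; the allocator_* fields and a meaningful MALLOC-STATS report are only populated on jemalloc builds, a libc build returns much less):

```python
#!/usr/bin/env python3
"""Sketch: fetch fragmentation fields from INFO memory plus MEMORY MALLOC-STATS.
With jemalloc the latter is the je_malloc_stats_print() output; with libc malloc
there is far less to show (which is what this thread is about)."""
import redis  # pip install redis

r = redis.Redis(host='localhost', port=6379)

info = r.info('memory')
for field in ('mem_allocator', 'used_memory_human', 'used_memory_rss_human',
              'used_memory_peak_human', 'mem_fragmentation_ratio',
              'allocator_frag_ratio', 'allocator_rss_ratio'):
    # allocator_* ratios only exist on jemalloc builds
    print(f'{field}: {info.get(field, "n/a")}')

# Raw allocator internals -- the command suggested above for a libc equivalent.
print(r.execute_command('MEMORY', 'MALLOC-STATS').decode(errors='replace'))
```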

By the way, why not use jemalloc?

@lw5885799
Author

Thanks, I will try this. The person who built the cluster accidentally changed the allocator in the production environment. We have changed the allocator to jemalloc, but I still want to know what happened.

@lw5885799
Author

I ran the MEMORY DOCTOR command on a cluster with a mem_fragmentation_ratio of 4.65. It says that after a memory peak, mem_fragmentation_ratio can be high, and that this is a normal and harmless issue; as soon as I fill the memory back up, mem_fragmentation_ratio goes down. That happened with the jemalloc allocator. I wonder if libc malloc would lead to the same thing?

My old cluster with libc malloc had used_memory_rss of 5.5GB, used_memory_peak of 4.9GB, and used_memory of 39MB. The mem_fragmentation_ratio reached 143. Is that still normal?

@oranagra
Member

If the RSS matches the peak memory, it could just be fragmentation, or some issue with the allocator.
But those numbers seem very unlikely: if we allocated ~5GB and then released the vast majority of that memory, I'd assume there must be some pages the allocator can return to the OS.
In other words, it seems unlikely that it would keep so much memory resident.

It's hard to tell for sure because we don't have much info with the libc allocator.
P.S. When using jemalloc, Redis has an activedefrag feature that can solve this.
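
For completeness, a sketch of turning that on at runtime (assuming redis-py, an instance at localhost:6379, and a jemalloc build; the thresholds below are illustrative, not recommendations):

```python
#!/usr/bin/env python3
"""Sketch: enable active defragmentation at runtime. On a non-jemalloc build
CONFIG SET activedefrag fails, since the defragger needs jemalloc's hooks."""
import redis  # pip install redis

r = redis.Redis(host='localhost', port=6379)

r.config_set('activedefrag', 'yes')
# Only start defragging once fragmentation exceeds ~100MB and 10% of used memory,
# and cap the CPU effort spent on it.
r.config_set('active-defrag-ignore-bytes', '104857600')
r.config_set('active-defrag-threshold-lower', '10')
r.config_set('active-defrag-cycle-max', '25')

print(r.config_get('activedefrag'))
print(r.info('memory')['mem_fragmentation_ratio'])
```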
