
containers using MADV_FREE do not see their memory usage decrease #2242

Closed · sylr opened this issue May 15, 2019 · 7 comments

sylr commented May 15, 2019

I have a bunch of containers running Thanos (a Golang app which proxies Prometheus queries).

The latest version of Thanos was compiled with Go 1.12, which uses MADV_FREE to release memory back to the system, instead of MADV_DONTNEED as Go 1.11 did.

Unfortunately, it seems that cAdvisor does not see the memory being released with MADV_FREE.

Here is a graph showing container_memory_usage_bytes after switching back to MADV_DONTNEED (instead of MADV_FREE) using GODEBUG=madvdontneed=1.

[graph: container_memory_usage_bytes before/after GODEBUG=madvdontneed=1]

I don't know if cAdvisor can do anything about it, though.
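
As an aside, a minimal sketch (not something cAdvisor needs) for anyone applying the workaround: the madvdontneed setting has to be present in the environment before the Go runtime starts, so it can help to print what the process actually sees.

```go
// Minimal sketch: print the GODEBUG value the process was started with, to
// confirm that madvdontneed=1 is really set inside the container. The
// variable must be in the environment before the runtime starts (e.g. set
// in the container spec); setting it from Go code afterwards has no effect.
package main

import (
	"fmt"
	"os"
)

func main() {
	fmt.Println("GODEBUG =", os.Getenv("GODEBUG"))
}
```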

@bwplotka

Related change in Go: golang/go#23687. I did not have a chance to dive into what this option does or what is expected, but the effect is quite scary.

@bwplotka

prometheus/prometheus#5524 suggests Go 1.12.5 has this fixed.

@dashpole (Collaborator)

I don't think cAdvisor can do anything here. cAdvisor just reports the value from cgroup files. This is either a golang issue, as suggested by @bwplotka above, or a kernel issue.
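
For context, here is my rough understanding of what "the value from cgroup files" means, as a sketch assuming cgroup v1 mounted at /sys/fs/cgroup/memory; the working-set approximation (usage minus total_inactive_file) is my reading of how cAdvisor derives it, so double-check against its source.

```go
// Rough sketch, assuming cgroup v1 mounted at /sys/fs/cgroup/memory inside
// the container (paths differ on cgroup v2). container_memory_usage_bytes
// essentially mirrors memory.usage_in_bytes, which keeps counting pages
// freed lazily with MADV_FREE until the kernel reclaims them; the working
// set is approximately usage minus the inactive file cache.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readUint reads a single integer value from a cgroup file.
func readUint(path string) uint64 {
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	v, err := strconv.ParseUint(strings.TrimSpace(string(raw)), 10, 64)
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	usage := readUint("/sys/fs/cgroup/memory/memory.usage_in_bytes")

	// Scan memory.stat for total_inactive_file.
	f, err := os.Open("/sys/fs/cgroup/memory/memory.stat")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var inactiveFile uint64
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) == 2 && fields[0] == "total_inactive_file" {
			inactiveFile, _ = strconv.ParseUint(fields[1], 10, 64)
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}

	// Working-set approximation: usage minus inactive file cache.
	workingSet := usage
	if inactiveFile < usage {
		workingSet -= inactiveFile
	}
	fmt.Printf("usage=%d total_inactive_file=%d working_set≈%d\n",
		usage, inactiveFile, workingSet)
}
```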


bwplotka commented May 30, 2019

Actually there is a fix: use the container_memory_working_set_bytes metric instead (: See update below, my understanding of WSS was obviously wrong.

cc @sylr


bwplotka commented Jun 2, 2019

After a couple of tests I am no longer sure we can rely on container_memory_working_set_bytes as a "memory saturation of the container" metric. Many people use it as a reference in alerts etc., but I can see some weird results:

Actual allocated memory on the heap of 2 Golang processes in each container:
[graph: per-process heap allocation]

container_memory_working_set_bytes is showing magic numbers:
[graph: container_memory_working_set_bytes]

For fun (because this is misleading, but works as expected) container_memory_usage_bytes:
[graph: container_memory_usage_bytes]

I wonder if my signals are weird (heap being larger than WSS) due to different scrape intervals etc., so some spikes are being missed.

cc @sylr @gouthamve

sylr (Author) commented Jun 3, 2019

@bwplotka I wouldn't trust the go_memstats_... metrics, as I find it particularly hard to understand what they really represent (see golang/go#32284).
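
For what it's worth, a small sketch of the runtime.MemStats fields those go_memstats_* gauges come from; the interpretations in the comments are my reading of the runtime documentation, so verify them there before relying on alerts.

```go
// Sketch printing the runtime.MemStats fields behind the main
// go_memstats_* gauges exposed by client_golang.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	fmt.Println("HeapAlloc   :", m.HeapAlloc)    // bytes of heap objects allocated and not yet freed
	fmt.Println("HeapInuse   :", m.HeapInuse)    // bytes in in-use heap spans
	fmt.Println("HeapIdle    :", m.HeapIdle)     // bytes in idle (unused) heap spans
	fmt.Println("HeapReleased:", m.HeapReleased) // idle bytes returned to the OS via madvise
	fmt.Println("Sys         :", m.Sys)          // total bytes obtained from the OS

	// With MADV_FREE, HeapReleased grows, but the kernel may not reclaim the
	// pages until there is memory pressure, so RSS/cgroup usage can stay high.
}
```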


bwplotka commented Jun 9, 2019

Wrote a detailed post about this here: https://bwplotka.dev/2019/golang-memory-monitoring/
