
containers using MADV_FREE do not see their memory usage decrease #2242

Closed
sylr opened this issue May 15, 2019 · 7 comments


@sylr

commented May 15, 2019

I have a bunch of containers running Thanos (a Go app that proxies Prometheus queries).

The latest version of Thanos was compiled with Go 1.12, which uses MADV_FREE to release memory back to the system, instead of MADV_DONTNEED as in Go 1.11.

Unfortunately, it seems that cAdvisor does not see the memory being released with MADV_FREE.

Here is a graph showing container_memory_usage_bytes after switching back to MADV_DONTNEED from MADV_FREE using GODEBUG=madvdontneed=1.

[screenshot: container_memory_usage_bytes dropping after the switch]

I don't know if cadvisor can do something about it though.

@bwplotka


commented May 15, 2019

Related change in Go: golang/go#23687. I haven't had a chance to dive into what this option does or what behavior is expected, but the effect is quite scary.

@bwplotka


commented May 15, 2019

According to prometheus/prometheus#5524, it seems Go 1.12.5 has this fixed.

@dashpole

Collaborator

commented May 15, 2019

I don't think cAdvisor can do anything here: it just reports the values from the cgroup files. This is either a Go issue, as suggested by @bwplotka above, or a kernel issue.

@dashpole dashpole closed this May 15, 2019

@bwplotka


commented May 30, 2019

Actually, there is a fix: use the container_memory_working_set_bytes metric instead (: See the update below; my understanding of WSS was obviously wrong.

cc @sylr

@bwplotka


commented Jun 2, 2019

After a couple of tests, I am no longer sure we can rely on container_memory_working_set_bytes as a "memory saturation of the container" metric. Many people use it as a reference in alerts etc., but I can see some weird results:

Actual allocated memory on the heap of the 2 Go processes in each container:
[screenshot]

container_memory_working_set_bytes is showing magic numbers:
[screenshot]

For fun (because this is misleading, but works as expected), container_memory_usage_bytes:
[screenshot]

I wonder if my signals look weird (heap larger than WSS) because of different scrape intervals etc., with some spikes being missed.

cc @sylr @gouthamve

@sylr

Author

commented Jun 3, 2019

@bwplotka I wouldn't trust the go_memstats_... metrics, as I find it particularly hard to understand what they really represent (see golang/go#32284).

@bwplotka


commented Jun 9, 2019

I wrote a detailed post about this here: https://bwplotka.dev/2019/golang-memory-monitoring/
