docker stats shows MEM USAGE of 16EiB -- that's exactly uint64.Max #42140
Comments
We get this info from the kernel, but an average is calculated, and I wonder if it would (e.g.) return ...

There's also some calculation happening in the CLI to remove cache from usage.

For docker 19.03.12 (https://github.com/docker/cli/blob/v19.03.12/cli/command/container/stats_helpers.go#L227-L231):

```go
// calculateMemUsageUnixNoCache calculate memory usage of the container.
// Page cache is intentionally excluded to avoid misinterpretation of the output.
func calculateMemUsageUnixNoCache(mem types.MemoryStats) float64 {
	return float64(mem.Usage - mem.Stats["cache"])
}
```

And docker 20.10, which added cgroup v2 support (https://github.com/docker/cli/blob/v20.10.5/cli/command/container/stats_helpers.go#L227-L249):

```go
// calculateMemUsageUnixNoCache calculate memory usage of the container.
// Cache is intentionally excluded to avoid misinterpretation of the output.
//
// On cgroup v1 host, the result is `mem.Usage - mem.Stats["total_inactive_file"]`.
// On cgroup v2 host, the result is `mem.Usage - mem.Stats["inactive_file"]`.
//
// This definition is consistent with cadvisor and containerd/CRI.
// * https://github.com/google/cadvisor/commit/307d1b1cb320fef66fab02db749f07a459245451
// * https://github.com/containerd/cri/commit/6b8846cdf8b8c98c1d965313d66bc8489166059a
//
// On Docker 19.03 and older, the result was `mem.Usage - mem.Stats["cache"]`.
// See https://github.com/moby/moby/issues/40727 for the background.
func calculateMemUsageUnixNoCache(mem types.MemoryStats) float64 {
	// cgroup v1
	if v, isCgroup1 := mem.Stats["total_inactive_file"]; isCgroup1 && v < mem.Usage {
		return float64(mem.Usage - v)
	}
	// cgroup v2
	if v := mem.Stats["inactive_file"]; v < mem.Usage {
		return float64(mem.Usage - v)
	}
	return float64(mem.Usage)
}
```
I guess it would be useful (if possible) to get the data that's returned by the API, to see what value is causing the issue 🤔
Just to clarify, you mean data from the ... (Note: the issue is intermittent, so we'll need to catch it.)
Yes, correct; it would give a datapoint on where the wrong value is coming from: a bug in the CLI doing the calculation, or incorrect/incomplete data returned by the daemon (or by containerd or the kernel).
Whoops... I ran into the same problem.

[The original comment attached command output and memory data from the API; the snippets are not reproduced here.]
Description

`docker stats` is intermittently reporting 16EiB (16 exbibytes) of MEM USAGE. This corresponds exactly to the maximum value of Go's uint64: 18446744073709551615.
Steps to reproduce the issue:

Not clear; this is happening intermittently. The containers are managed by AWS ECS. We first noticed the memory usage spikes in CloudWatch, the jump representing a change of approximately 10^6 in magnitude. We configured our instance logging to separately capture the direct output of the `docker stats` command on the instance when we see the anomaly. A snippet of the `docker stats` output is pasted above (with redacted container ID and name).

Describe the results you received:
When running `docker stats`, we intermittently see the MEM USAGE spike to 16EiB.

Describe the results you expected:
Our memory limit is 256MiB; we expect usage to stay below that.
Additional information you deem important (e.g. issue happens only occasionally):
The issue happens occasionally.
Output of `docker version`:

Output of `docker info`:

Additional environment details (AWS, VirtualBox, physical, etc.):
AWS ECS managed containers