A Pod becomes a memory hog in Kind (the issue seems Kind-specific) #2623
I think you will find that this is a known limitation, and it is not trivial to fix. kind is not suitable for testing this sort of thing, unfortunately. Further, if you add additional nodes, the host resources will be over-reported (duplicated for each node). See #877 and the issues linked from there.
@BenTheElder no, I'm running this same manifest in Minikube v1.24.0 with the docker driver, and the container consumes just about 150Mi, while the limit in the manifest is 512Mi. The problem is not in enforcing the limits; it's the container going amok under Kind. Can you observe/reproduce this behavior?
This sounds like #2597. I cannot look into it just this moment; will revisit.
I thought my Kind config could be useful:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: office
containerdConfigPatches:
- |-
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.88.63:5000"]
endpoint = ["http://192.168.88.63:5000"]
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
extraPortMappings:
- containerPort: 80
hostPort: 8080
protocol: TCP
- containerPort: 443
hostPort: 8081
protocol: TCP
- role: worker
- role: worker
Thanks, I will try to replicate this soon; I suspect it's a variation on #2597.
I have not seen this behavior under Kind with any images other than the haproxy image. |
hmm ... #760 ...
some applications preallocate based on the number of file descriptors; please check that thread to see if this is the same issue
I agree with containerd that we should not cap the limits, because everybody can have different ones for different reasons. The other thing is that some software allocates a lot of memory based on those limits by default, and some distros have very big values.
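Purely as an illustration of the mitigation described above, here is a minimal sketch (the names, image tag, and config are hypothetical and not taken from the attached manifest): pinning maxconn in haproxy's global section stops haproxy from sizing its startup allocations from the effectively unlimited file-descriptor rlimit it inherits inside a kind node.

# Hypothetical ConfigMap + Pod: an explicit maxconn bounds haproxy's
# connection/file-descriptor bookkeeping instead of letting it scale
# with the nofile rlimit inherited from containerd.
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-capped
data:
  haproxy.cfg: |
    global
        maxconn 1024
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s
    frontend fe
        bind *:8080
        default_backend be
    backend be
        server local 127.0.0.1:8081
---
apiVersion: v1
kind: Pod
metadata:
  name: haproxy-capped
spec:
  containers:
  - name: haproxy
    image: haproxy:2.4
    volumeMounts:
    - name: cfg
      # the official image reads /usr/local/etc/haproxy/haproxy.cfg by default
      mountPath: /usr/local/etc/haproxy
      readOnly: true
    resources:
      limits:
        memory: 512Mi
  volumes:
  - name: cfg
    configMap:
      name: haproxy-capped

If the diagnosis in this thread is right, a pod along these lines should stay near the ~150Mi observed elsewhere, while the same pod without the maxconn line sizes itself from the nofile limit instead.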
/close
Duplicate of #760, thanks.
@aojea: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
See Antonio's comments following:
In more detail: in #760 we set the file handle limit on containerd (which your pods run under) to match upstream containerd. They've since reverted that change and set limits to infinity, which we match, in #1799 mentioned above (and then commented on in #2623 (comment)). Setting limits is problematic because it's host/workload-dependent. Ideally haproxy would not excessively consume file handles. I haven't had a chance to dig in and see what mitigations are available yet.

I might argue this is closer to #2597, where a library used by the NFS server behaved similarly poorly after systemd 240+ started raising the file handle limit (https://bugzilla.redhat.com/show_bug.cgi?id=1796545). That bug dates to 2020, and more recent NFS / underlying libraries should be patched (#1487 (comment)), but in #2597 the host is a bit more dated. In this case it's less clear; the cause sits somewhere between host config, kind/containerd/systemd, and the workload.
I think that is haproxy, isn't it?
The same haproxy image behaves quite well under Minikube, for example, or in a real kOps-managed cluster. I've tried this path: docker-library/haproxy#179, but the expected reply is "it's not a haproxy problem".
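A quick way to confirm that the environments really hand out different limits (a sketch; the pod name and busybox tag are placeholders) is to run the same throwaway pod in the kind cluster and in Minikube/kOps and print the file-descriptor limit its container inherits:

# Hypothetical check pod: prints the nofile limit as seen from inside a container.
apiVersion: v1
kind: Pod
metadata:
  name: show-nofile
spec:
  restartPolicy: Never
  containers:
  - name: show-nofile
    image: busybox:1.35
    command: ["sh", "-c", "ulimit -n && grep 'open files' /proc/1/limits"]

Comparing the output of kubectl logs show-nofile in each cluster should make the difference obvious: if this is the #760/#1799 situation, the kind cluster reports an enormous value inherited from containerd's LimitNOFILE=infinity, while the clusters where haproxy stays around 150Mi report something much smaller.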
What happened:
A very simple pod turns into a memory hog on a 3-node Kind cluster. This sample haproxy pod (see the manifest below) rapidly consumes as much memory as limits permit and starts consuming swap (and if there is no limit configured, it easily consumes gigabytes of memory).
What you expected to happen:
I expect this particular image to consume about 150Mi of RAM as it happens in Minikube and in a kOps/AWS cluster.
How to reproduce it (as minimally and precisely as possible):
Please apply the attached manifest and watch the pod's memory usage.
Anything else we need to know?:
The problem is reproducible only in Kind, which is why I'm filing the bug here; it may be specific to Kind. Minikube and kOps are not affected.
Environment:
- kind version (output of kind version): kind v0.11.1 go1.17.6 linux/amd64
- Kubernetes version (output of kubectl version): Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-21T23:01:33Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
- Docker version (output of docker info): containerd version: 1407cab509ff0d96baa4f0eb6ff9980270e6e620.m, runc version: v1.0.3-0-gf46b6ba2, init version: de40ad0
- OS (from /etc/os-release): Manjaro Linux Qonos 21.2.2
- Attached manifest: test.yaml.txt