cpu.share is not properly set for containers in kubernetes #8358
Did you see a problem with memory as well when it was set? I swore I had verified this on the node a month or so ago, but maybe it was memory and not CPU. /sub
Do we have an integration test that can verify this integration between Docker and the kubelet?
From my testing this applies to both CPU and memory. Docker accepts the limit (and reports it), but it doesn't ask libcontainer to set it. @derekwaynecarr we do not have a test today; we need one :)
I swore I checked memory a while back, and it worked too. This is my first time checking CPU, though. I tore down my cluster just now; I'll bring one up and report back soon. We do have integration tests, but they run against a fake Docker daemon. We also have a bunch of e2e tests to make sure pods on a node run as expected, but no resource-related validation. I am filing an issue to add an e2e test for this.
Memory is not set either; all resource limits are dropped.
Filed #8365.
Just found the issue: it is in go-dockerclient and how we use it. This has actually never worked, I think :( We set the limits in the container create request, but those resource fields don't actually exist in the official Docker API. There is verbiage stating they should be provided elsewhere. What is even more confusing is that docker inspect will happily show the config we provide, unknown fields included, but since those fields are not part of the API they are never applied. Will file and fix the issue in go-dockerclient, and then change our use of it to fill in the right fields.
The plot thickens: it looks like this was a breaking change in the Docker remote API. The previous version had the field on the create request: https://docs.docker.com/reference/api/docker_remote_api_v1.17/#create-a-container
This allows for backwards and forwards compatibility, since old Docker versions expect it in Create() and newer ones expect it in Start(). Fixes kubernetes#8358
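The shape of that fix can be sketched as follows. The struct and field names below are illustrative stand-ins, not go-dockerclient's exact API; the point is simply that the resource limits get written into both the Create()-time and the Start()-time configs, so whichever one a given Docker version honors, the limits take effect:

```go
package main

import "fmt"

// Illustrative stand-ins for the container configs go-dockerclient
// sends. Older Docker remote API versions read resource limits from
// the Create() request; newer ones read them from the Start() request.
type createConfig struct {
	Memory    int64 // bytes
	CPUShares int64
}

type hostConfig struct {
	Memory    int64
	CPUShares int64
}

// applyLimits fills the resource limits into BOTH configs, so the
// limits apply regardless of which API generation Docker speaks.
func applyLimits(cc *createConfig, hc *hostConfig, memBytes, cpuShares int64) {
	cc.Memory = memBytes
	cc.CPUShares = cpuShares
	hc.Memory = memBytes
	hc.CPUShares = cpuShares
}

func main() {
	var cc createConfig
	var hc hostConfig
	applyLimits(&cc, &hc, 256*1024*1024, 409)
	fmt.Println(cc.CPUShares, hc.CPUShares)
}
```

Duplicating the fields is harmless: a server that doesn't know a field simply ignores it, which is exactly why the bug went unnoticed for so long.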
I was trying to use LimitRange to set a default CPU request for each container, to keep a node from being badly overcommitted before we have a more intelligent scheduler, and identified this issue.
Through kubernetes I scheduled a container asking for 400m (409 cpu.shares), but kubernetes always assigns 1024 cpu.shares to that cgroup.
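For reference, the 400m → 409 figure comes from scaling millicores linearly onto the cgroup cpu.shares scale, where one full CPU is 1024 shares. A minimal sketch of that conversion (the function name and the minimum-shares floor here are illustrative, not Kubernetes' exact code):

```go
package main

import "fmt"

const (
	sharesPerCPU  = 1024 // cgroup cpu.shares value for one full CPU
	milliCPUToCPU = 1000 // millicores per CPU
	minShares     = 2    // kernel-enforced minimum for cpu.shares
)

// milliCPUToShares converts a CPU request in millicores to a cgroup
// cpu.shares value, flooring at the kernel minimum.
func milliCPUToShares(milliCPU int64) int64 {
	if milliCPU == 0 {
		return minShares
	}
	shares := milliCPU * sharesPerCPU / milliCPUToCPU
	if shares < minShares {
		return minShares
	}
	return shares
}

func main() {
	fmt.Println(milliCPUToShares(400))  // 400m -> 409
	fmt.Println(milliCPUToShares(1000)) // one full CPU -> 1024
}
```

So 400 × 1024 / 1000 = 409 (integer division), while 1024 is exactly what an unlimited container gets by default, which is why the bug looks like the limit was silently dropped.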
Initially I thought the kubelet wasn't passing the CPU limit to Docker properly, but docker inspect proved me wrong:
Then I thought it might be capped at the Docker level, but testing through the docker CLI proved me wrong again:
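To see what the kernel actually applied, rather than what docker inspect reports, the value can be read straight out of the cgroup filesystem. A small sketch of that check (the path shown is the typical cgroup-v1 layout for Docker-managed containers and may differ on your host; the container-ID placeholder is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readCPUShares parses the contents of a cpu.shares file into an integer.
func readCPUShares(contents string) (int64, error) {
	return strconv.ParseInt(strings.TrimSpace(contents), 10, 64)
}

func main() {
	// Typical cgroup-v1 path for a Docker-managed container; substitute
	// the real container ID, and adjust the layout for your host.
	path := "/sys/fs/cgroup/cpu/docker/<container-id>/cpu.shares"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("could not read cgroup file:", err)
		return
	}
	shares, err := readCPUShares(string(data))
	if err != nil {
		fmt.Println("unexpected contents:", err)
		return
	}
	fmt.Println("kernel-applied cpu.shares:", shares)
}
```

This is the discrepancy at the heart of the bug: inspect showed the requested limit while the cgroup file still held the 1024 default.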
It might be a docker/libcontainer issue triggered by the way we are using Docker. Filed the issue here for investigation. @vmarmol is going to look into it.