Kubernetes VPA not working with fractional quotas #78

Closed
wallee94 opened this issue Oct 12, 2023 · 3 comments
@wallee94
Contributor

I found an issue when using automaxprocs in Kubernetes pods managed and autoscaled by a VPA.

For containers with a fractional CPU limit between 1 and 2 cores, the current implementation rounds GOMAXPROCS down to 1, which means the container never uses more than one core's worth of CPU because it has a single active thread, and the VPA won't scale it up because the pod appears to still have resources available.

For some components that depend on automaxprocs, like Prometheus, it would be helpful to have an option to round the quota up instead of down, so that autoscaling is triggered when usage reaches the threshold.

Technically, any fractional number of CPUs has the same problem, but the larger the quota, the more likely it is that the VPA threshold lies below the rounded-down value. The problem is most frequent in pods with fractional CPU limits between 1 and 2.
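
For illustration, here is a minimal sketch (not automaxprocs' actual code) of the floor-based rounding described above; in the real library the quota is read from the cgroup CPU quota/period files, whereas here it is passed in directly:

```go
package main

import (
	"fmt"
	"math"
)

// gomaxprocsFromQuota mirrors the floor-based rounding described above:
// a fractional CPU quota is rounded down, with a minimum of 1.
func gomaxprocsFromQuota(cpuQuota float64) int {
	n := int(math.Floor(cpuQuota))
	if n < 1 {
		n = 1
	}
	return n
}

func main() {
	// A pod limited to 1.5 CPUs ends up with GOMAXPROCS=1, so the Go
	// runtime schedules goroutines on a single P and the container's CPU
	// usage plateaus around 1 core, below a typical VPA scale-up threshold.
	fmt.Println(gomaxprocsFromQuota(1.5)) // 1
	fmt.Println(gomaxprocsFromQuota(2.7)) // 2
}
```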

@tocrafty

We share the same requirement.

#13 altered the default behavior from ceil to floor. I doubt throttling is a real problem here. If a process is frequently throttled, we should consider allocating more CPUs to the container. Running out of CPU in the Go service does not necessarily deprive other services of CPU; the OS will distribute CPU time between them.

#14 tried to add an option, but unfortunately the author closed it due to prolonged inactivity.

I hope #79 can be accepted as soon as possible.

@sywhang
Contributor

sywhang commented Nov 16, 2023

Hey @wallee94 and @tocrafty.

Thanks for bringing this issue up, I'll get to reviewing #79 (and hopefully get that merged) as soon as possible.

@wallee94
Contributor Author

#79 has been merged ✅
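
As a usage note, here is a hedged sketch of opting into ceiling rounding, assuming the option added in #79 is exposed as maxprocs.RoundQuotaFunc (check the library's release notes for the exact name and signature):

```go
package main

import (
	"fmt"
	"math"
	"runtime"

	"go.uber.org/automaxprocs/maxprocs"
)

func main() {
	// Round the fractional CPU quota up instead of down, so that a
	// 1.5-CPU limit yields GOMAXPROCS=2 and the container can reach the
	// VPA's scale-up threshold.
	undo, err := maxprocs.Set(maxprocs.RoundQuotaFunc(func(v float64) int {
		return int(math.Ceil(v))
	}))
	defer undo()
	if err != nil {
		fmt.Println("automaxprocs:", err)
	}

	fmt.Println("GOMAXPROCS =", runtime.GOMAXPROCS(0))
}
```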
