Add k8s resource limit patch lib #19
Conversation
Force-pushed from 33c79e8 to bd99feb
lib/charms/observability_libs/v0/kubernetes_compute_resources_patch.py (outdated review threads, resolved)
There's a complication: is there a way to force K8s to KEEP the same unit the user provided? @simskij @rbarry82
UPDATE: similarly,
Not that I know of, but I'll dig. What happens if, instead of 0.9Gi, you set it to 966367641600m from the get-go? Does it still bug out?
Ok, so found it. The reason for this is that 0.9Gi (gibibytes) in its canonical form (i.e. without fractional digits, using the largest possible suffix) is 966367641600m.
If we just disallow setting fractions in the first place, the problem will be solved. For instance, 900Mi will not get converted, while 0.9Gi will. Likely, 900Mi was also what the user tried to express when they put in 0.9Gi, which isn't entirely accurate, as 0.9Gi = 921.6Mi.
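The canonicalization behaviour discussed above can be sketched in Python. This is a hypothetical helper, not part of the library: the idea is that a quantity is "safe" from conversion only when it denotes a whole number of bytes, since Kubernetes falls back to the milli (`m`) representation when the byte count is fractional.

```python
from decimal import Decimal

# Suffix multipliers: decimal (k/M/G), binary (Ki/Mi/Gi), and milli ("m").
SUFFIXES = {
    "m": Decimal("0.001"),
    "k": Decimal(1000), "M": Decimal(1000) ** 2, "G": Decimal(1000) ** 3,
    "Ki": Decimal(1024), "Mi": Decimal(1024) ** 2, "Gi": Decimal(1024) ** 3,
}

def to_bytes(quantity: str) -> Decimal:
    """Parse a K8s-style quantity such as '0.9Gi' or '900Mi' into bytes."""
    # Try longer suffixes first so 'Mi' is not mistaken for 'M'.
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if quantity.endswith(suffix):
            return Decimal(quantity[: -len(suffix)]) * SUFFIXES[suffix]
    return Decimal(quantity)  # bare byte count

def is_canonical_safe(quantity: str) -> bool:
    """True if the quantity is a whole number of bytes, i.e. it will not
    be rewritten into the fractional milli ('m') form."""
    value = to_bytes(quantity)
    return value == value.to_integral_value()
```

With this sketch, `is_canonical_safe("900Mi")` holds while `is_canonical_safe("0.9Gi")` does not, matching the proposed "disallow fractions" rule.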
There's also the case of
Problem: when there is a single unit of prometheus and the user sets resource limits that are too high, juju gets stuck because the pod cannot be scheduled and there is no charm running to take corrective action. Any ideas @simskij @rbarry82?
This is kind of a consistent mess with the kube scheduler. That is, the kube scheduler is not aware of what else is happening on the system, and a trivial process which just

Inside kube itself, though... In general, kubelet itself can (and by best practice, does) reserve a certain amount of memory beyond which it will refuse to schedule, because otherwise the kernel OOM killer may accidentally kill important things (like the kubelet, even, or anything else), or swap them out. Inside the pod,

Lastly, you'd need to check on the pod itself, which may be further limited (if there are no resource limits set in the podspec, this is probably unnecessary paperwork, but regardless). Trivially, checking

If there isn't a reservation, then all you have to go on is whether a particular node (the one you're running on, maybe) has enough free memory, or you can check all of them, then 🤞 that some other pod which is pending or cannot be scheduled due to
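A rough sketch of the "check every node's allocatable memory" fallback described above. The helper names are hypothetical, and the per-node allocatable values would come from the K8s API (e.g. listing Node objects); here they are passed in as plain quantity strings. As the comment notes, this ignores what is already scheduled on each node, so a positive answer is necessary but not sufficient.

```python
# Binary suffix multipliers, as used by kubelet for memory quantities.
_BIN = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3}

def quantity_to_bytes(q: str) -> float:
    """Parse a binary-suffixed K8s quantity ('2Gi', '3968116Ki') to bytes."""
    for suffix, mult in _BIN.items():
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * mult
    return float(q)  # bare byte count

def fits_somewhere(request: str, allocatable_per_node: list[str]) -> bool:
    """Best-effort check: could at least one node fit this memory request?

    Ignores pods already running on each node, hence the 'fingers crossed'
    caveat above: True does not guarantee the pod will actually schedule.
    """
    need = quantity_to_bytes(request)
    return any(quantity_to_bytes(node) >= need for node in allocatable_per_node)
```

For example, a 2Gi request fits on a node advertising `3968116Ki` allocatable, while an 8Gi request fits on neither that node nor a 1Gi one.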
Force-pushed from 4268e7a to ba4b9d7
Love the reduced complexity by dropping the scaling factor functionality! Good job!
Resolved and updated the loki PR accordingly.
Issue
Need to be able to limit resource usage of a charm.
Crossref: OPENG-272
Solution
limits and requests.
Context
NTA.
Testing Instructions
kubectl patch statefulset prom -n welcome -p '{"spec": {"template": {"spec": {"containers": [{"name":"prometheus", "resources": {"limits": {"cpu": "2", "memory":"2Gi"}, "requests": {"cpu": "2", "memory": "1Gi"}} }] }}}}'
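The same patch from the testing instructions can also be built programmatically. A sketch follows: the `resources_patch` helper is hypothetical, and the commented-out apply step assumes a lightkube-style client, so verify the call against whichever K8s client you use.

```python
def resources_patch(container: str, limits: dict, requests: dict) -> dict:
    """Build a strategic-merge patch body setting a container's resources."""
    return {
        "spec": {"template": {"spec": {"containers": [
            {
                "name": container,
                "resources": {"limits": limits, "requests": requests},
            }
        ]}}}
    }

# Equivalent of the kubectl command above:
body = resources_patch(
    "prometheus",
    limits={"cpu": "2", "memory": "2Gi"},
    requests={"cpu": "2", "memory": "1Gi"},
)
# Applying it would look roughly like (lightkube-style, hypothetical):
# Client().patch(StatefulSet, "prom", body, namespace="welcome")
```

Building the body as a plain dict keeps it easy to unit-test before it ever touches a cluster.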
Release Notes
Add k8s resource limit patch lib.