Kernel consumption too high #1279
Comments
I suspect this is an overflow issue. Did you have this issue before, or only during this period?
@jochen-schuettler thanks for investigating this.
@sunya-ch the power models are using
(no change by v0.7.8)
@sunya-ch: see above, any idea?
The issue is still open. @marceloamaral, @rootfs: Can I help you in any way to investigate?
Additional info: The VM is running on an AMD EPYC 7R13 processor (stepping 1). This is not included in cpus.yaml, so maybe the problem is related to that?
Maybe there is some overflow during extended tests? @vprashar2929, have you seen this in any tests?
@jochen-schuettler can you get Prometheus stats?
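For reference, a rate query over the counter discussed below would make a wraparound show up as a sharp spike. The metric name here is an assumption on my part; check Kepler's /metrics endpoint for the exact name exported by your version:

```promql
# Hypothetical metric name; Kepler's exported names vary between versions.
# A sudden, physically implausible jump in this rate points at the counter
# resetting or wrapping rather than at real CPU time.
rate(kepler_process_bpf_cpu_time_ms_total[5m])
```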
We have occurrences regardless of the kernel namespace in a local setup, on both bare metal and a VM. Looking at bpf_cpu_time_ms there reveals rapid rises over a few minutes. Kepler 0.7.2 seems OK, yes.
What happened?
The measured consumption for the kernel is much too high, beyond what the machine is physically capable of.
![grafik] (attached screenshot, original signed URL expired: graph of measured kernel power consumption)
What did you expect to happen?
A correct measurement for the kernel.
How can we reproduce it (as minimally and precisely as possible)?
Install Kepler 0.7.7 into AWS ECS. Observe the measured kernel consumption over a few days.
Anything else we need to know?
No response
Kepler image tag
Kubernetes version
Cloud provider or bare metal
OS version
Install tools
Kepler deployment config
Container runtime (CRI) and version (if applicable)
No response
Related plugins (CNI, CSI, ...) and versions (if applicable)