Bug 2105123: tuned: disable irqbalance #396
Conversation
Force-push: d4d5210 → 37845e7
Resolved review threads (outdated):
- assets/performanceprofile/scripts/clear-irqbalance-banned-cpus.sh
- pkg/performanceprofile/controller/performanceprofile/components/assets_scripts_test.go
Force-push: 37845e7 → 536df06
Thanks @jmencak for the early review. I want to add at least an e2e test to cover the "tuned restart while pods running" scenario, and then I'll lift the WIP.
Force-push: 70b850e → e0252d7
Content-wise this is ready. Now testing and checking the e2e test.
Force-push: e0252d7 → 5583284
/retest
The PR is ready for review. I'm happy enough with the e2e test, but suggestions are very welcome.
Thank you for the PR and the changes so far, Francesco! This looks good to me with the exception of some nits, but it might be useful if Yanir/Martin could also have a look from the PAO side.
Resolved review threads (outdated):
- test/e2e/performanceprofile/testdata/render-expected-output/manual_tuned.yaml
- pkg/performanceprofile/controller/performanceprofile/components/utils.go
Force-push: 5583284 → c31e736
/hold
/retest
Proposed crio fix: cri-o/cri-o#6146. Since this change needs to be backportable, I'll work on a workaround in the e2e tests.
Force-push: 4821953 → c1e9c2e
Testing the irqbalance handling on tuned restart highlighted a crio bug in handling CPU affinity masks with an odd number of hex digits. We expect this bug to have little impact on production environments, because it is unlikely they will have a number of CPUs that is a multiple of 4 but not a multiple of 8; still, we filed cri-o/cri-o#6145. In order to have a backportable change, we fix our utility code to deal with incorrect padding.

Signed-off-by: Francesco Romani <fromani@redhat.com>
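For illustration only, a minimal Go sketch of the normalization idea behind the padding fix; `padCPUMask` is a hypothetical name, and the repository's actual utility code may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// padCPUMask zero-pads each comma-separated group of a sysfs-style CPU
// affinity mask to the full 8 hex digits (32 bits per group), so that
// "f,ffffffff" and "0000000f,ffffffff" compare equal after normalization.
func padCPUMask(mask string) string {
	groups := strings.Split(strings.TrimSpace(mask), ",")
	for i, g := range groups {
		if pad := 8 - len(g); pad > 0 {
			groups[i] = strings.Repeat("0", pad) + g
		}
	}
	return strings.Join(groups, ",")
}

func main() {
	// 36 CPUs: a multiple of 4 but not of 8, so the leading group has
	// an odd number of hex digits and may come back unpadded.
	fmt.Println(padCPUMask("f,ffffffff")) // prints "0000000f,ffffffff"
}
```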
Force-push: c1e9c2e → eae8b1e
@fromanirh: all tests passed!
Added some small nit questions, but given the discussion above and the crio bug discovered, this LGTM.
/hold
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: bartwensley, fromanirh, MarSik.
/bugzilla refresh
@MarSik: This pull request references Bugzilla bug 2105123, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug. Requesting review from QA contact.
/hold cancel
@fromanirh: All pull requests linked via external trackers have merged: Bugzilla bug 2105123 has been moved to the MODIFIED state.
/cherry-pick release-4.11
@MarSik: new pull request created: #435
* test: perfprof: utils: make sure to unquote cpus

  Make sure to unquote the cpumask output to prevent false negatives.

  Signed-off-by: Francesco Romani <fromani@redhat.com>

* perfprof: utils: robustness fixes

  Add test cases and logging enhancements that emerged during local testing.

  Signed-off-by: Francesco Romani <fromani@redhat.com>

* perfprof: tuned: disable the irqbalance plugin

  The tuned irqbalance plugin clears the irqbalance banned-CPUs list when tuned starts. The list is then managed dynamically by the runtime handlers. On node restart, the tuned pod can be started AFTER the workload pods (neither kubelet nor kubernetes offers ordering guarantees when recovering the node state), clearing the banned-CPUs list while pods are running and compromising the IRQ isolation guarantees. The same holds true if the NTO pod restarts for whatever reason. To prevent this disruption, we disable the irqbalance plugin entirely. Another component in the stack must now clear the irqbalance CPU ban list once per node reboot.

  Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2105123

  Signed-off-by: Francesco Romani <fromani@redhat.com>

* e2e: perfprof: add tests for cpu ban list handling

  Add a test to verify that a tuned restart will not clear the irqbalance CPU ban list, which is the key reason why we disabled the irqbalance tuned plugin earlier. Note there is no guarantee that any component in the stack will reset the irqbalance CPU ban list once: crio unconditionally restores from a snapshot taken the first time the server runs, which is likely but not guaranteed to be correct, and there is no way to declare or check the content of the value crio would reset.

  Signed-off-by: Francesco Romani <fromani@redhat.com>

* perfprof: functests: utils: fix cpu mask padding

  Testing the irqbalance handling on tuned restart highlighted a crio bug in handling CPU affinity masks with an odd number of hex digits. We expect this bug to have little impact on production environments, because it is unlikely they will have a number of CPUs that is a multiple of 4 but not a multiple of 8; still, we filed cri-o/cri-o#6145. In order to have a backportable change, we fix our utility code to deal with incorrect padding.

  Signed-off-by: Francesco Romani <fromani@redhat.com>
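As a side note on the first commit above, the false negative comes from comparing a quoted value (as read from, e.g., `IRQBALANCE_BANNED_CPUS="..."` in /etc/sysconfig/irqbalance) against an unquoted expectation. A minimal Go sketch of the unquoting idea; `unquoteMask` is a hypothetical name, not the repository's actual helper:

```go
package main

import (
	"fmt"
	"strings"
)

// unquoteMask trims surrounding whitespace and quotes from a cpumask
// value, so the quoted form read from a sysconfig file compares equal
// to the bare mask.
func unquoteMask(raw string) string {
	return strings.Trim(strings.TrimSpace(raw), `"'`)
}

func main() {
	raw := `"0000000f"`
	fmt.Println(raw == "0000000f")              // false: the false negative
	fmt.Println(unquoteMask(raw) == "0000000f") // true after unquoting
}
```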
The tuned irqbalance plugin clears the irqbalance banned-CPUs list when tuned starts. The list is then managed dynamically by the runtime handlers.
On node restart, the tuned pod can be started AFTER the workload pods (neither kubelet nor kubernetes offers ordering guarantees when recovering the node state), clearing the banned-CPUs list while pods are running and compromising the IRQ isolation guarantees. The same holds true if the NTO pod restarts for whatever reason.
The fix is twofold:
- in this PR we disable the tuned irqbalance plugin entirely, leaving another component in the stack to clear the irqbalance CPU ban list once per node reboot (see the sketch below);
- in #413 we add e2e tests to verify the aforementioned steps.
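In this PR, the once-per-boot clearing is carried by a shell script (clear-irqbalance-banned-cpus.sh). Purely for illustration, here is the same idea sketched in Go, under the assumption of a stamp file under /run (tmpfs, so it vanishes on reboot); the stamp path and the standalone-program shape are assumptions, not the PR's actual mechanism:

```go
package main

import (
	"os"
	"regexp"
)

const (
	sysconfig = "/etc/sysconfig/irqbalance"
	// /run is tmpfs, so the stamp disappears on reboot and the clear
	// runs at most once per boot. The stamp name is illustrative.
	stamp = "/run/irqbalance-banned-cpus-cleared"
)

var bannedRe = regexp.MustCompile(`(?m)^\s*IRQBALANCE_BANNED_CPUS=.*$`)

func main() {
	if _, err := os.Stat(stamp); err == nil {
		return // already cleared this boot
	}
	data, err := os.ReadFile(sysconfig)
	if err != nil {
		panic(err)
	}
	// Reset the ban list to empty, then record that we did it.
	cleared := bannedRe.ReplaceAll(data, []byte(`IRQBALANCE_BANNED_CPUS=""`))
	if err := os.WriteFile(sysconfig, cleared, 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(stamp, nil, 0o644); err != nil {
		panic(err)
	}
}
```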
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2105123
Signed-off-by: Francesco Romani <fromani@redhat.com>
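Purely as an illustration of the e2e idea from #413: capture the irqbalance banned-CPUs mask, restart tuned, and verify the mask is unchanged. The real suite drives the tuned pod through the cluster API; in this condensed Go sketch a systemd restart on a plain host stands in, and the file parsing is an assumption:

```go
package e2esketch

import (
	"os"
	"os/exec"
	"regexp"
	"testing"
)

var bannedRe = regexp.MustCompile(`IRQBALANCE_BANNED_CPUS="?([^"\n]*)"?`)

// readBannedCPUs extracts the banned-CPUs mask from the irqbalance
// sysconfig file, unquoting it (per the unquote fix above).
func readBannedCPUs(t *testing.T) string {
	t.Helper()
	data, err := os.ReadFile("/etc/sysconfig/irqbalance")
	if err != nil {
		t.Fatal(err)
	}
	m := bannedRe.FindSubmatch(data)
	if m == nil {
		return ""
	}
	return string(m[1])
}

// TestTunedRestartKeepsBanList checks that the mask survives a tuned
// restart, which holds once the tuned irqbalance plugin is disabled.
func TestTunedRestartKeepsBanList(t *testing.T) {
	before := readBannedCPUs(t)
	if out, err := exec.Command("systemctl", "restart", "tuned").CombinedOutput(); err != nil {
		t.Fatalf("restart failed: %v: %s", err, out)
	}
	if after := readBannedCPUs(t); after != before {
		t.Fatalf("banned CPUs changed across tuned restart: %q -> %q", before, after)
	}
}
```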