
Bug 2105123: tuned: disable irqbalance #396

Merged
merged 5 commits into openshift:master from stable-banned-cpus on Aug 17, 2022

Conversation

ffromani
Contributor

@ffromani ffromani commented Jul 20, 2022

The tuned irqbalance plugin clears the irqbalance banned CPUs list when tuned starts. The list is then managed dynamically by the runtime handlers.

On node restart, the tuned pod can be started AFTER the workload pods (neither kubelet nor kubernetes offers ordering guarantees when recovering the node state), clearing the banned CPUs list while pods are running and compromising the IRQ isolation guarantees. The same holds true if the NTO pod restarts for whatever reason.

The fix is twofold:

  1. disable the irqbalance plugin, so tuned no longer clears the irqbalance banned CPUs list.
  2. depend on crio to reset the banned CPUs list earlier in the boot sequence.
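For context, the list in question is irqbalance's banned-CPUs mask; on RHEL-family hosts it is configured via /etc/sysconfig/irqbalance. The value below is illustrative only, not taken from this PR:

```ini
# /etc/sysconfig/irqbalance (illustrative excerpt)
# Hex mask of CPUs irqbalance must leave alone. tuned's irqbalance plugin
# used to clear this on start; after this PR, crio resets it early in boot.
IRQBALANCE_BANNED_CPUS=0000000f
```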

In #413 we add e2e tests to verify the aforementioned steps.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2105123
Signed-off-by: Francesco Romani fromani@redhat.com

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jul 20, 2022
@openshift-ci openshift-ci bot requested review from jmencak and kpouget July 20, 2022 09:21
@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 20, 2022
@ffromani
Contributor Author

thanks @jmencak for the early review. I want to add at least an e2e test to cover the "tuned restart while pods running" scenario, and then I'll lift the WIP.

@ffromani ffromani force-pushed the stable-banned-cpus branch 6 times, most recently from 70b850e to e0252d7 Compare July 21, 2022 14:00
@ffromani
Contributor Author

Content-wise, this is ready. Now testing and checking the e2e test.

@ffromani
Contributor Author

/retest

@ffromani ffromani changed the title WIP: tuned: disable irqbalance, handle ban cpus earlier tuned: disable irqbalance, handle ban cpus earlier Jul 22, 2022
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jul 22, 2022
@ffromani
Contributor Author

the PR is ready for review. I'm happy enough with the e2e test, but suggestions very welcome.

@ffromani
Contributor Author

/cc @yanirq @MarSik

@openshift-ci openshift-ci bot requested review from MarSik and yanirq July 22, 2022 10:32
Contributor

@jmencak jmencak left a comment


Thank you for the PR and the changes so far, Francesco! This looks good to me with the exceptions of some nits, but it might be useful if Yanir/Martin could have a look also from PAO side.

@ffromani
Contributor Author

/hold
I was made aware of https://github.com/cri-o/cri-o/blob/v1.24.1/internal/runtimehandlerhooks/high_performance_hooks.go#L360 (which I skimmed over before), so the new unit may be unnecessary (not harmful, but redundant). I'll issue a reduced PR with the e2e test to check whether this crio functionality is enough.
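To check whether crio's restore is enough, the e2e test needs to read the current ban list from the node. A minimal sketch of extracting it from the contents of /etc/sysconfig/irqbalance; `bannedCPUsFrom` is a hypothetical helper for illustration, not the actual e2e utility:

```go
package main

import (
	"fmt"
	"strings"
)

// bannedCPUsFrom extracts the IRQBALANCE_BANNED_CPUS value from the
// contents of /etc/sysconfig/irqbalance. Hypothetical helper shown only
// to illustrate what a before/after comparison would look at.
func bannedCPUsFrom(conf string) string {
	for _, line := range strings.Split(conf, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "IRQBALANCE_BANNED_CPUS=") {
			// Values may be quoted in sysconfig files.
			return strings.Trim(strings.TrimPrefix(line, "IRQBALANCE_BANNED_CPUS="), `"`)
		}
	}
	return ""
}

func main() {
	conf := "# irqbalance config\nIRQBALANCE_BANNED_CPUS=\"0000000f\"\n"
	fmt.Println(bannedCPUsFrom(conf)) // prints 0000000f
}
```

The test would capture this value before restarting tuned and assert it is unchanged afterwards.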

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jul 22, 2022
@ffromani
Contributor Author

/retest

@ffromani
Contributor Author

proposed crio fix: cri-o/cri-o#6146

Since this change needs to be backportable, I'll work around the issue in the e2e tests.

Testing the irqbalance handling on tuned restart highlighted
a crio bug when handling odd-numbered cpu affinity masks.
We expect this bug to have little impact on production environments,
because it's unlikely they will have a CPU count that is a multiple
of 4 but not of 8; still, we filed
cri-o/cri-o#6145

In order to have a backportable change, we fix our utility code
to deal with incorrect padding.

Signed-off-by: Francesco Romani <fromani@redhat.com>
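The padding workaround described above can be sketched by normalizing masks before comparison. This is my illustrative assumption of such a utility, not the PR's actual code: dropping the comma separators and leading zeros makes differently padded chunks compare equal.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeCPUMask is a sketch (not the actual NTO helper) of tolerating
// inconsistently padded CPU affinity masks: with separators and leading
// zeros removed, "0000000f,ffffffff" and "f,ffffffff" compare equal.
func normalizeCPUMask(mask string) string {
	s := strings.ToLower(strings.ReplaceAll(mask, ",", ""))
	s = strings.TrimLeft(s, "0")
	if s == "" {
		return "0" // all-zero mask
	}
	return s
}

func main() {
	fmt.Println(normalizeCPUMask("0000000f,ffffffff") == normalizeCPUMask("f,ffffffff")) // true
}
```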
@openshift-ci
Contributor

openshift-ci bot commented Aug 11, 2022

@fromanirh: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@yanirq
Contributor

yanirq commented Aug 15, 2022

Added some small nit questions, but given the discussion above and the crio bug discovered, this LGTM.
Feel free to remove hold if it seems fit.
/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Aug 15, 2022
@yanirq
Contributor

yanirq commented Aug 15, 2022

/hold

@openshift-ci
Contributor

openshift-ci bot commented Aug 17, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bartwensley, fromanirh, MarSik

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@MarSik
Contributor

MarSik commented Aug 17, 2022

/bugzilla refresh

@openshift-ci openshift-ci bot added bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. and removed bugzilla/invalid-bug Indicates that a referenced Bugzilla bug is invalid for the branch this PR is targeting. labels Aug 17, 2022
@openshift-ci
Contributor

openshift-ci bot commented Aug 17, 2022

@MarSik: This pull request references Bugzilla bug 2105123, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.12.0) matches configured target release for branch (4.12.0)
  • bug is in the state NEW, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)

Requesting review from QA contact:
/cc @gsr-shanks

In response to this:

/bugzilla refresh


@openshift-ci openshift-ci bot requested a review from gsr-shanks August 17, 2022 12:51
@ffromani
Contributor Author

/hold cancel
agreed offline to move forward

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 17, 2022
@openshift-merge-robot openshift-merge-robot merged commit c5cf0bd into openshift:master Aug 17, 2022
@openshift-ci
Contributor

openshift-ci bot commented Aug 17, 2022

@fromanirh: All pull requests linked via external trackers have merged:

Bugzilla bug 2105123 has been moved to the MODIFIED state.

In response to this:

Bug 2105123: tuned: disable irqbalance


@ffromani ffromani deleted the stable-banned-cpus branch August 17, 2022 13:08
@MarSik
Contributor

MarSik commented Aug 18, 2022

/cherry-pick release-4.11

@openshift-cherrypick-robot

@MarSik: new pull request created: #435

In response to this:

/cherry-pick release-4.11


IlyaTyomkin pushed a commit to IlyaTyomkin/cluster-node-tuning-operator that referenced this pull request May 23, 2023
* test: perfprof: utils: make sure to unquote cpus

Make sure to unquote the cpumask output to prevent false negatives

Signed-off-by: Francesco Romani <fromani@redhat.com>

* perfprof: utils: robustness fixes

Add testcases and log enhancements that emerged during local testing.

Signed-off-by: Francesco Romani <fromani@redhat.com>

* perfprof: tuned: disable the irqbalance plugin

The tuned irqbalance plugin clears the irqbalance banned CPUs list
when tuned starts. The list is then managed dynamically by the runtime
handlers.

On node restart, the tuned pod can be started AFTER the workload pods
(neither kubelet nor kubernetes offers ordering guarantees when
recovering the node state), clearing the banned CPUs list while pods
are running and compromising the IRQ isolation guarantees. The same
holds true if the NTO pod restarts for whatever reason.

To prevent this disruption, we disable the irqbalance plugin entirely.
Another component in the stack must now clear the irqbalance cpu ban
list once per node reboot.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=2105123
Signed-off-by: Francesco Romani <fromani@redhat.com>

* e2e: perfprof: add tests for cpu ban list handling

Add a test to verify that a tuned restart will not clear
the irqbalance cpu ban list, which is the key reason
why we disabled the irqbalance tuned plugin earlier.

Note there is no guarantee that any component in the stack
will reset the irqbalance cpu ban list once.

crio unconditionally restores the list from a snapshot
taken the first time the server runs, which is likely
but not guaranteed to be correct.

There's no way to declare or check the content of the value
crio would restore.

Signed-off-by: Francesco Romani <fromani@redhat.com>

* perfprof: functests: utils: fix cpu mask padding

Testing the irqbalance handling on tuned restart highlighted
a crio bug when handling odd-numbered cpu affinity masks.
We expect this bug to have little impact on production environments,
because it's unlikely they will have a CPU count that is a multiple
of 4 but not of 8; still, we filed
cri-o/cri-o#6145

In order to have a backportable change, we fix our utility code
to deal with incorrect padding.

Signed-off-by: Francesco Romani <fromani@redhat.com>

Signed-off-by: Francesco Romani <fromani@redhat.com>
IlyaTyomkin pushed a commit to IlyaTyomkin/cluster-node-tuning-operator that referenced this pull request Jun 13, 2023
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. bugzilla/severity-high Referenced Bugzilla bug's severity is high for the branch this PR is targeting. bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. lgtm Indicates that a PR is ready to be merged.