Refactoring to use profiles.tuned.openshift.io #34
Conversation
/hold
Some thoughts
Looks good to me. This rework makes the operand as dumb as possible; the new informer and new design will minimize API access and resolve the issue of slow propagation of tuned profiles.
/lgtm
This is a largish refactoring of the NTO's operand, openshift-tuned.

Changes:
- Do not pull any node/pod information from the API server. Use a shared informer for profiles.tuned.openshift.io objects, which carry a precalculated tuned profile for the node.
- Switch to klog; glog seems unmaintained and vendored-in libraries already depend on klog.
- Added tuna and tuned-profiles-cpu-partitioning profiles.
- Redesign of the retry loop.
- Makefile/.gitignore cleanup.
- Added vendor dependencies.
- Bump the OCP base to 4.4, tuned patches maintenance.
/hold cancel
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jmencak, sjug. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing