removed the workerlatencyprofile status related code #3129
Conversation
The context here is, I believe, that we will instead document how to view various objects in the MCO namespace to determine whether the profile has rolled out. I'm fine with this approach since we don't have to modify any MCO status reporting either.
/assign @harche @swghosh @rphillips
@sairameshv can you please add the description "The worker latency profile status update..." from the PR description to the commit message for this?
Sure
Just to add to this, I have updated the code to generate events in case of updates happening to the config node CR.
/test e2e-aws
/lgtm
/hold
/hold cancel
/lgtm
/retest
Generally lgtm, one question below (please also format the commit message)
@@ -242,7 +227,7 @@ func (ctrl *Controller) addNodeConfig(obj interface{}) {
 	if nodeConfig.Name != ctrlcommon.ClusterNodeInstanceName {
 		message := fmt.Sprintf("The node.config.openshift.io \"%v\" is invalid: metadata.name Invalid value: \"%v\" : must be \"%v\"", nodeConfig.Name, nodeConfig.Name, ctrlcommon.ClusterNodeInstanceName)
 		glog.V(2).Infof(message)
-		ctrl.updateNodeConfigDegradedStatus(nodeConfig, message, "UpdateProhibited")
+		ctrl.eventRecorder.Eventf(nodeConfig, corev1.EventTypeNormal, "ActionProhibited", message)
For errors in the controller for nodes objects, other than events here, is there anywhere else this gets bubbled to?
The ideal place for errors related to config node objects would have been the status field of the node config object. But unfortunately, we don't have it anymore. We don't want to introduce any unnecessary coupling between `Machine Config Pool` and `Worker Latency Profiles` by adding any `Worker Latency Profile` specific statuses there.
So we thought of just emitting an event here so that we can passively let the user know that they are attempting a prohibited transition (e.g. `default` to `low` or vice-versa) and it has been rejected. IMO, this is slightly better than asking the user to look into the logs of the machine config controller.
ack, thanks!
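The approach discussed above — emitting a `Normal` event for a prohibited name instead of writing a degraded status — can be sketched roughly as follows. This is an illustrative stand-alone sketch, not the MCO code: the `EventRecorder` interface and `logRecorder` type here are minimal stand-ins for client-go's `record.EventRecorder` (which takes a `runtime.Object` rather than a string), and `clusterNodeInstanceName` mirrors the role of `ctrlcommon.ClusterNodeInstanceName`.

```go
package main

import "fmt"

// EventRecorder is a minimal stand-in for client-go's record.EventRecorder,
// simplified to take the object's name as a string so the sketch has no
// Kubernetes dependencies.
type EventRecorder interface {
	Eventf(object, eventType, reason, messageFmt string, args ...interface{})
}

// logRecorder just prints events; a real recorder would create Event objects
// in the cluster, visible via `oc get events`.
type logRecorder struct{}

func (logRecorder) Eventf(object, eventType, reason, messageFmt string, args ...interface{}) {
	fmt.Printf("event on %q [%s/%s]: %s\n",
		object, eventType, reason, fmt.Sprintf(messageFmt, args...))
}

// clusterNodeInstanceName plays the role of ctrlcommon.ClusterNodeInstanceName.
const clusterNodeInstanceName = "cluster"

// validateNodeConfigName emits a Normal/ActionProhibited event (instead of
// setting a degraded status) when the node.config object has an unexpected
// name, and reports whether the name was valid.
func validateNodeConfigName(rec EventRecorder, name string) bool {
	if name != clusterNodeInstanceName {
		msg := fmt.Sprintf("The node.config.openshift.io %q is invalid: metadata.name Invalid value: %q : must be %q",
			name, name, clusterNodeInstanceName)
		rec.Eventf(name, "Normal", "ActionProhibited", msg)
		return false
	}
	return true
}

func main() {
	rec := logRecorder{}
	fmt.Println(validateNodeConfigName(rec, "cluster"))   // valid name: prints true
	fmt.Println(validateNodeConfigName(rec, "not-valid")) // emits event, prints false
}
```

The user-facing benefit is that a rejected transition surfaces in the object's event stream rather than only in the machine config controller's logs.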
The workerlatencyprofile status update will be written to the corresponding operator statuses, such as the KubeAPIServer Operator and the Kube Controller Manager. Hence, it is not required to update the status of the config node CR. A piece of code has been added to generate events whenever there is a config node update.
/retest
@sairameshv: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: harche, rphillips, sairameshv, swghosh, yuqi-zhang. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
- What I did
Removed the code related to the worker latency profile status updates from the config Node custom resource.
- How to verify it
Update the config node CR with a relevant worker latency profile type and observe that there is no status update present in the config Node CR.
- Description for the changelog
The worker latency profile status update will be written to the corresponding operator statuses, such as the KubeAPIServer Operator status and the Kube Controller Manager status. Hence, it is not required to update the config Node CR status again.
Also, code has been added to generate events whenever an update happens on the config node object.