cloud node controller: implement with workqueues and node lister #94736
Conversation
Welcome @HaibaraAi96!
Hi @HaibaraAi96. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @andrewsykim
// Finally, if no error occurs we Forget this item so it does not
// get queued again until another change happens.
cnc.workqueue.Forget(obj)
klog.Infof("Successfully synced '%s'", key)
This is probably too verbose for Infof; maybe increase the verbosity level here:
klog.V(4).Infof("Successfully synced '%s'", key)
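(For reference, V(4) log lines are only emitted when the controller runs with verbosity 4 or higher, e.g. --v=4, so routine per-key sync messages stay out of default logs.)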
// Start a loop to periodically update the node addresses obtained from the cloud
wait.Until(func() { cnc.UpdateNodeStatus(context.TODO()) }, cnc.nodeStatusUpdateFrequency, stopCh)
go wait.Until(cnc.runWorker, cnc.nodeStatusUpdateFrequency, stopCh)
I think we still want the periodic run of UpdateNodeStatus based on cnc.nodeStatusUpdateFrequency, and syncHandler should actually only call UpdateCloudNode and not UpdateNodeStatus. So this should look like this instead:
go wait.Until(func() { cnc.UpdateNodeStatus(context.TODO()) }, cnc.nodeStatusUpdateFrequency, stopCh)
go wait.Until(cnc.runWorker, time.Second, stopCh)
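For context, the worker side of this pattern typically follows the standard client-go loop — a minimal sketch, assuming a rate-limited workqueue field and a syncHandler(key string) error as in sample-controller; processNextWorkItem is an illustrative helper name, not necessarily the code in this PR:

func (cnc *CloudNodeController) runWorker() {
    for cnc.processNextWorkItem() {
    }
}

// processNextWorkItem pops one key off the queue, syncs it, and requeues
// it with rate-limited backoff on failure so failed syncs aren't dropped.
func (cnc *CloudNodeController) processNextWorkItem() bool {
    obj, shutdown := cnc.workqueue.Get()
    if shutdown {
        return false
    }
    defer cnc.workqueue.Done(obj)

    key, ok := obj.(string)
    if !ok {
        // Unexpected item type; drop it so it is not retried forever.
        cnc.workqueue.Forget(obj)
        return true
    }
    if err := cnc.syncHandler(key); err != nil {
        cnc.workqueue.AddRateLimited(key)
        return true
    }
    cnc.workqueue.Forget(obj)
    return true
}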
Thanks for the PR @HaibaraAi96! I think overall we need this PR to update the cloud node controller to use workqueues so that failed syncs aren't dropped. I left some initial feedback.
Force-pushed from 25bcd85 to 3646b54
/retest
@HaibaraAi96: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message. In response to this:
> /retest
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
Force-pushed from 3646b54 to 50bc5ab
/retest
return nil
}

// Get the Node resource with this namespace/name
"Get the Node resource with this name" since nodes do not have namespaces.
// The Node resource may no longer exist, in which case we stop
// processing.
if apierrors.IsNotFound(err) {
    utilruntime.HandleError(fmt.Errorf("Node '%s' in work queue no longer exists", key))
I don't think this line is necessary since nodes not existing is not an error case here.
sure, will remove that
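A sketch of how the sync could read after combining both points above (the cluster-scoped key split and treating NotFound as a quiet no-op); nodesLister is an assumed informer-backed field, not necessarily the exact name in this PR:

// Nodes are cluster-scoped, so the namespace part of the queue key is empty.
_, name, err := cache.SplitMetaNamespaceKey(key)
if err != nil {
    return err
}
node, err := cnc.nodesLister.Get(name)
if apierrors.IsNotFound(err) {
    // The node was deleted while its key was queued; nothing left to sync.
    return nil
}
if err != nil {
    return err
}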
@@ -295,7 +432,8 @@ func (cnc *CloudNodeController) updateNodeAddress(ctx context.Context, node *v1.
    // in a retry-if-conflict loop.
    type nodeModifier func(*v1.Node)

-func (cnc *CloudNodeController) UpdateCloudNode(ctx context.Context, _, newObj interface{}) {
+// UpdateCloudNode handles updating existing nodes registered with the cloud taint.
+func (cnc *CloudNodeController) UpdateCloudNode(ctx context.Context, newObj interface{}) {
maybe just name this syncNode now?
Also thinking that syncNode should receive the name of the node instead of newObj interface{}, and in initializeNode we should fetch the latest node from the informer cache (line 478) instead of from the apiserver.
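Roughly what that suggestion could look like — a sketch only, with nodesLister standing in for the informer-backed lister:

func (cnc *CloudNodeController) syncNode(ctx context.Context, nodeName string) error {
    // Fetch the latest node from the informer cache rather than the apiserver.
    cachedNode, err := cnc.nodesLister.Get(nodeName)
    if err != nil {
        return err
    }
    // Work on a copy so the shared informer cache is never mutated.
    node := cachedNode.DeepCopy()
    // ... initialize the node if it still carries the cloud taint,
    // otherwise update its cloud-derived fields, as UpdateCloudNode did ...
    _ = node
    return nil
}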
Force-pushed from 50bc5ab to f18c219
Force-pushed from 039ebc3 to 4e6be1e
/retest
@@ -347,60 +433,51 @@ func (cnc *CloudNodeController) initializeNode(ctx context.Context, node *v1.Nod
    })
    if err != nil {
        utilruntime.HandleError(err)
this can be removed as well
Force-pushed from 4e6be1e to 0fc1834
Force-pushed from 0fc1834 to 914ff60
Force-pushed from d0abb51 to 91c641b
-	if err != nil {
-		return err
+	var curNode *v1.Node
+
nit: remove line here
Force-pushed from 1f587fb to d654864
Force-pushed from d654864 to 9931871
/test pull-kubernetes-e2e-gce-ubuntu-containerd
Force-pushed from 9931871 to 4ba09a8
/retest
/approve
/lgtm
Thanks @HaibaraAi96!
/retest
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: andrewsykim, HaibaraAi96. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/milestone v1.20
-	if err != nil {
-		return err
+	var curNode *v1.Node
+	if cnc.cloud.ProviderName() == "gce" {
FYI @cheftako we should probably discuss getting rid of this custom check for GCE; I can't recall the initial reason behind the GCE-specific check. Would probably be nice to have something generic here.
+1. Most of these checks are because the test needs something which wasn't generally available. Not sure about this particular test, but I've written a few of these. It's frequently something like the webhook tests: you need to create a special test container and deploy it to your registry. The intention was always that other cloud providers would follow by adding the container to their registry.
As title
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Refactor the node controller to use a workqueue and a node lister: the informer watches for changes to the current state of Kubernetes objects and sends events to the workqueue, where they are picked up by worker(s) to process. Using a workqueue makes the implementation more readable and easier to maintain. A rough sketch of the wiring is shown below.
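The sketch below follows the common client-go informer/workqueue pattern; field and variable names are illustrative, not the exact code in this PR:

enqueueNode := func(obj interface{}) {
    // Derive the queue key from the object's metadata and enqueue it.
    if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
        cnc.workqueue.Add(key)
    }
}
nodeInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc:    enqueueNode,
    UpdateFunc: func(oldObj, newObj interface{}) { enqueueNode(newObj) },
})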
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: