Kubelet watches necessary secrets/configmaps instead of periodic polling #64752
Conversation
@wojtek-t: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed 255ed93 to 75826ea (Compare)
/test pull-kubernetes-integration
// ManagerMode is a mode in which internal managers (secret, configmap) are running.
// Default: "Watching"
// +optional
ManagerMode KubeletManagerMode `json:"managerMode,omitempty"`
The name ManagerMode seems too generic. Can we make it more specific for ConfigMap & Secrets?
Also, is there any value to configure secrets and configmaps differently? I'm guessing not...
Changed to ConfigMapAndSecretManagerMode - if you have a better suggestion, I can change it one more time.
Regarding configuring them differently - I consciously merged those two; I think we don't want to diverge them.
Let me know if you disagree with that.
// changed to objects that are in its interest.
WatchingManagerMode KubeletManagerMode = "Watching"
)
/cc @mtaufen FYI
Thanks :). Comments below:
- Is it necessary to expose this as a config? How risky do we think the change is? We already watch Pods, and dynamic Kubelet config watches ConfigMaps, so watch is not unprecedented in the Kubelet.
- If so, I think we can potentially improve the naming and structure here. Naming is hard :). See below.
With respect to "manager", I really like the advice in https://blog.codinghorror.com/i-shall-call-it-somethingmanager/
Mode is more general than it has to be; specifically, this is a strategy for observing resource updates.
ResourceNotifyStrategy was my first guess of what to call the strategy type, but this is still a bit vague, IMO, because it is not clear who is being notified (is it the resource, or the Kubelet?), or why there is a notification.
ResourceChangedEventStrategy might be a bit better, since ResourceChangedEvent makes it clear what's going on. But Event is overloaded for another Kubernetes concept, so I don't like this either.
ResourceChangeDetectionStrategy manages to avoid overloading Event. This is my best guess so far, IMO. Still feels verbose; maybe there's a way to simplify it.
For a field name, maybe ConfigMapAndSecretChangeDetectionStrategy. It's a bit verbose, so if there's a standard term that clearly captures ConfigMaps and Secrets, and whatever other objects this field will cover in the future, consider using that. But be wary of making it too general.
Other things:
Should we allow the strategies for ConfigMaps and Secrets to be configured independently?
I think we can simply use the infinitive of the verb. Instead of Caching and Watching, we can just say Cache and Watch. Simple is a bit vague; would it be more accurate to call this strategy Poll?
Also, structure-wise, is there a broader category for resource-consumption-lifecycles in the Kubelet, and should we invent a config substructure for that?
I like ResourceChangeDetectionStrategy - changed.
Regarding specific configmaps and secrets:
- I don't have any idea for a common term covering those two, so I left ConfigMapAndSecretChangeDetectionStrategy.
- I don't want to split them, because in my opinion we want to avoid diverging those two.
About the infinitive verbs - yeah, that sounds fine. Also, I decided on "Fetch" instead of "Poll" - it's more accurate, I think.
About the structure - I don't think we will add more there, so introducing a substructure doesn't make sense now in my opinion.
pkg/kubelet/secret/secret_manager.go
Outdated

metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/clock"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/watch"

clientset "k8s.io/client-go/kubernetes"
Merge this with the previous imports?
done
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/clock"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/watch"

clientset "k8s.io/client-go/kubernetes"
Merge this with the previous import.
done
@@ -113,6 +113,7 @@ kubeletConfiguration:
  kubeAPIBurst: 10
  kubeAPIQPS: 5
  makeIPTablesUtilChains: true
  managerMode: Watching
Since "watching" is the default, why do you need to set it explicitly everywhere?
Those are some kubeadm test data - it seems we are explicitly listing the whole config in multiple places.
I didn't dig deeper into those kubeadm files, but the problem seems orthogonal to this PR.
@yujuhong: GitHub didn't allow me to request PR reviews from the following users: FYI. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed 75826ea to e55dcbb (Compare)
@yujuhong - thanks a lot for review.
PTAL
Force-pushed ec37421 to a1fc044 (Compare)
PTAL
@@ -86,6 +86,7 @@ kubeletConfiguration:
  clusterDNS:
  - 10.96.0.10
  clusterDomain: cluster.local
  configMapAndSecretChangeDetectionStrategy: Watche
s/Watche/Watch
(and other places)
done
const (
	// FetchChangeDetectionStrategy is a mode in which kubelet fetches
	// necessary objects directly from apiserver.
	FetchChangeDetectionStrategy ResourceChangeDetectionStrategy = "Fetch"
Had another thought on this - would it be more standard to call it Get, given that this strategy is just sending a GET request to the API server?
Makes sense - done.
Force-pushed a1fc044 to 72a0f4d (Compare)
/lgtm
@@ -96,6 +96,7 @@ KubeletConfiguration:
  clusterDNS:
  - 10.96.0.10
  clusterDomain: cluster.local
  configMapAndSecretChangeDetectionStrategy: Watch
/cc @luxas
Thanks for the heads up!
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: mtaufen, wojtek-t, yujuhong The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest Review the full test history for this PR. Silence the bot with an
@wojtek-t how did this affect things on the large scale tests?
[MILESTONENOTIFIER] Milestone Pull Request Labels Incomplete @derekwaynecarr @mtaufen @wojtek-t @yujuhong Action required: This pull request requires label changes. If the required changes are not made within 2 days, the pull request will be moved out of the v1.12 milestone. kind: Must specify exactly one of
@liggitt When I was running 5k-node tests, this visibly reduced cpu load on the apiserver (though the drop in apicall latencies wasn't significant). My goal in merging it early in the cycle is to get many more runs over the whole cycle, so that in case of problems we can fix them (or, worst case, disable it). But my two experiments didn't show any.
Automatic merge from submit-queue (batch tested with PRs 65187, 65206, 65223, 64752, 65238). If you want to cherry-pick this change to another branch, please follow the instructions here.
Hi, I am in a situation where some pods need to detect a changed ConfigMap, and on every deploy I am obliged to redeploy some pods to get the new values of the ConfigMap... I know that I can do the rolling restart... but I want to put this
No description provided.