Kubelet watches necessary secrets/configmaps instead of periodic polling #64752

Merged

Conversation


@wojtek-t wojtek-t commented Jun 5, 2018

No description provided.

@wojtek-t wojtek-t added this to the v1.12 milestone Jun 5, 2018
@k8s-ci-robot
Contributor

@wojtek-t: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Jun 5, 2018
@wojtek-t wojtek-t added release-note-none Denotes a PR that doesn't merit a release note. and removed do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Jun 5, 2018
@wojtek-t wojtek-t force-pushed the default_to_watching_managers branch from 255ed93 to 75826ea Compare June 5, 2018 11:08
@k8s-ci-robot k8s-ci-robot added area/kubeadm sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. labels Jun 5, 2018
@wojtek-t
Member Author

wojtek-t commented Jun 5, 2018

/test pull-kubernetes-integration

// ManagerMode is a mode in which internal managers (secret, configmap) are running.
// Default: "Watching"
// +optional
ManagerMode KubeletManagerMode `json:"managerMode,omitempty"`
Contributor

The name ManagerMode seems too generic. Can we make it more specific for ConfigMap & Secrets?

Contributor

Also, is there any value in configuring secrets and configmaps differently? I'm guessing not...

Member Author

Changed to ConfigMapAndSecretManagerMode - if you have a better suggestion, I can change it one more time.

Regarding configuring them differently - I consciously merged those two; I don't think we want to let them diverge.
Let me know if you disagree with that.

// changed to objects that are in its interest.
WatchingManagerMode KubeletManagerMode = "Watching"
)

Contributor

/cc @mtaufen FYI

Contributor

Thanks :). Comments below:

  1. Is it necessary to expose this as a config? How risky do we think the change is? We already watch Pods, dynamic Kubelet config watches ConfigMaps, so watch is not unprecedented in the Kubelet.
  2. If so, I think we can potentially improve the naming and structure here. Naming is hard :). See below.

With respect to "manager", I really like the advice in https://blog.codinghorror.com/i-shall-call-it-somethingmanager/

Mode is more general than it needs to be; specifically, this is a strategy for observing resource updates.

ResourceNotifyStrategy was my first guess of what to call the strategy type, but this is still a bit vague, IMO, because it is not clear who is being notified (is it the resource, or the Kubelet?), or why there is a notification.

ResourceChangedEventStrategy might be a bit better, since ResourceChangedEvent makes it clear what's going on. But Event is overloaded for another Kubernetes concept, so I don't like this either.

ResourceChangeDetectionStrategy manages to avoid overloading Event. This is my best guess so far, IMO. Still feels verbose, maybe there's a way to simplify it.

For a field name, maybe ConfigMapAndSecretChangeDetectionStrategy. It's a bit verbose, so if there's a standard term that clearly captures ConfigMaps and Secrets, and whatever other objects this field will cover in the future, consider using that. But be wary of making it too general.

Other things:
Should we allow the strategies for ConfigMaps and Secrets to be configured independently?

I think we can simply use the infinitive of the verb: instead of Caching and Watching, we can just say Cache and Watch. Simple is a bit vague; would it be more accurate to call that strategy Poll?

Contributor

Also, structure-wise, is there a broader category for resource-consumption-lifecycles in the Kubelet, and should we invent a config substructure for that?

Member Author

I like ResourceChangeDetectionStrategy - changed.

Regarding specific configmaps and secrets:

  • I don't have an idea for a common term covering those two, so I left ConfigMapAndSecretChangeDetectionStrategy
  • I don't want to split them, because in my opinion we should avoid letting the two diverge

About the infinitive verbs - yeah, that sounds fine. Also, I decided on "Fetch" instead of "Poll" - I think it's more accurate.

About the structure - I don't think we will have more settings in this area, so introducing a substructure doesn't make sense now, in my opinion.
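
To make the outcome of this naming thread concrete, here is a minimal sketch of the type, the three values discussed above, and the merged ConfigMap/Secret field. This is illustrative only, not the exact merged diff: the package name, the trimmed-down struct, and the CacheChangeDetectionStrategy constant name are assumptions; only the type name, the field name, and the "Fetch"/"Cache"/"Watch" values come from the thread.

package kubeletconfig // illustrative package name, not the exact file in the PR

// ResourceChangeDetectionStrategy names how the kubelet observes updates to
// ConfigMaps and Secrets referenced by pods.
type ResourceChangeDetectionStrategy string

const (
	// FetchChangeDetectionStrategy: fetch objects directly from the apiserver
	// on demand (renamed to "Get" later in this thread).
	FetchChangeDetectionStrategy ResourceChangeDetectionStrategy = "Fetch"
	// CacheChangeDetectionStrategy: serve objects from a TTL-based cache.
	CacheChangeDetectionStrategy ResourceChangeDetectionStrategy = "Cache"
	// WatchChangeDetectionStrategy: watch for changes to the objects the
	// kubelet is interested in.
	WatchChangeDetectionStrategy ResourceChangeDetectionStrategy = "Watch"
)

// KubeletConfiguration intentionally keeps a single field for both ConfigMaps
// and Secrets, so the two strategies cannot diverge.
type KubeletConfiguration struct {
	// +optional
	ConfigMapAndSecretChangeDetectionStrategy ResourceChangeDetectionStrategy `json:"configMapAndSecretChangeDetectionStrategy,omitempty"`
}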


metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/clock"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/watch"

clientset "k8s.io/client-go/kubernetes"
Contributor

Merge this with the previous imports?

Member Author

done


metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/clock"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/watch"

clientset "k8s.io/client-go/kubernetes"
Contributor

Merge this with the previous import.

Member Author

done

@@ -113,6 +113,7 @@ kubeletConfiguration:
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
managerMode: Watching
Contributor

Since "watching" is the default, why do you need to set it explicitly everywhere?

Member Author

Those are some kubeadm test data - it seems we are explicitly listing the whole config in multiple places.
I didn't dig deeper into those kubeadm files, but the problem seems orthogonal to this PR.

@k8s-ci-robot
Contributor

@yujuhong: GitHub didn't allow me to request PR reviews from the following users: FYI.

Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

/cc @mtaufen FYI

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@wojtek-t wojtek-t force-pushed the default_to_watching_managers branch from 75826ea to e55dcbb Compare June 18, 2018 07:46
Member Author

@wojtek-t wojtek-t left a comment

@yujuhong - thanks a lot for review.

PTAL

@wojtek-t wojtek-t force-pushed the default_to_watching_managers branch 2 times, most recently from ec37421 to a1fc044 Compare June 19, 2018 07:35
Member Author

@wojtek-t wojtek-t left a comment

PTAL

@@ -86,6 +86,7 @@ kubeletConfiguration:
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watche
Contributor

s/Watche/Watch
(and other places)

Member Author

done

const (
// FetchChangeDetectionStrategy is a mode in which kubelet fetches
// necessary objects directly from apiserver.
FetchChangeDetectionStrategy ResourceChangeDetectionStrategy = "Fetch"
Contributor

Had another thought on this - would it be more standard to call it Get, given that this strategy is just sending a GET request to the API server?

Member Author

Makes sense - done.
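
For context on what the Get, Cache, and Watch values mean inside the kubelet, here is a rough, self-contained sketch of how a strategy value could be mapped to a secret-manager implementation. The Manager interface, the stub manager types, and the constructor below are simplified placeholders for illustration, not the kubelet's real types or functions.

package main

import "fmt"

// Strategy mirrors ResourceChangeDetectionStrategy from the kubelet config.
type Strategy string

const (
	Get   Strategy = "Get"
	Cache Strategy = "Cache"
	Watch Strategy = "Watch"
)

// secretManager is a simplified stand-in for the kubelet's secret manager.
type secretManager interface {
	Describe() string
}

type simpleManager struct{}   // GETs the object from the apiserver on every access
type cachingManager struct{}  // TTL-based cache in front of GETs
type watchingManager struct{} // individual watches on Secrets referenced by pods

func (simpleManager) Describe() string   { return "GET from the apiserver on every access" }
func (cachingManager) Describe() string  { return "serve from a TTL cache, re-fetch on expiry" }
func (watchingManager) Describe() string { return "watch only Secrets referenced by pods on this node" }

// newSecretManager shows the selection step; the real kubelet wires in an
// apiserver client and object-key helpers instead of these empty stubs.
func newSecretManager(s Strategy) (secretManager, error) {
	switch s {
	case Get:
		return simpleManager{}, nil
	case Cache:
		return cachingManager{}, nil
	case Watch:
		return watchingManager{}, nil
	default:
		return nil, fmt.Errorf("unknown configMapAndSecretChangeDetectionStrategy %q", s)
	}
}

func main() {
	for _, s := range []Strategy{Get, Cache, Watch} {
		m, err := newSecretManager(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %s\n", s, m.Describe())
	}
}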

@wojtek-t wojtek-t force-pushed the default_to_watching_managers branch from a1fc044 to 72a0f4d Compare June 19, 2018 20:14
@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 19, 2018
Contributor

@yujuhong yujuhong left a comment

/lgtm

@@ -96,6 +96,7 @@ KubeletConfiguration:
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
Contributor

/cc @luxas

Member

Thanks for the heads up!

@k8s-ci-robot k8s-ci-robot requested a review from luxas June 19, 2018 21:47
@mtaufen
Contributor

mtaufen commented Jun 19, 2018

/lgtm

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mtaufen, wojtek-t, yujuhong

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@fejta-bot

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel comment for consistent failures.

@liggitt
Member

liggitt commented Jun 21, 2018

@wojtek-t how did this affect things on the large scale tests?

@k8s-github-robot

[MILESTONENOTIFIER] Milestone Pull Request Labels Incomplete

@derekwaynecarr @mtaufen @wojtek-t @yujuhong

Action required: This pull request requires label changes. If the required changes are not made within 2 days, the pull request will be moved out of the v1.12 milestone.

kind: Must specify exactly one of kind/bug, kind/cleanup or kind/feature.
priority: Must specify exactly one of priority/critical-urgent, priority/important-longterm or priority/important-soon.


@wojtek-t
Member Author

@wojtek-t how did this affect things on the large scale tests?

@liggitt When I was running 5k-node tests, this visibly reduced CPU usage on the apiserver (though the drop in API call latencies wasn't significant). My goal in merging it early in the cycle is to get many more runs over the whole cycle, so that in case of problems we can fix them (or, in the worst case, disable it). But my two experiments didn't show any.

@k8s-github-robot

Automatic merge from submit-queue (batch tested with PRs 65187, 65206, 65223, 64752, 65238). If you want to cherry-pick this change to another branch, please follow the instructions here.

@narjesshmida

narjesshmida commented Mar 21, 2023

Hi, I am in a situation where some pods need to detect a changed ConfigMap, and on every deploy I have to redeploy some pods to pick up the new ConfigMap values. I know I can do a rolling restart, but I want to use this feature (configMapAndSecretChangeDetectionStrategy / ResourceChangeDetectionStrategy), because even when I change some Secrets I want all the pods to detect the changes.
Any help please?

Labels
approved - Indicates a PR has been approved by an approver from all required OWNERS files.
area/kubeadm
cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
lgtm - "Looks good to me", indicates that a PR is ready to be merged.
milestone/incomplete-labels
release-note-none - Denotes a PR that doesn't merit a release note.
sig/cluster-lifecycle - Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.
size/L - Denotes a PR that changes 100-499 lines, ignoring generated files.

10 participants