
Move to a structured status for dynamic kubelet config #63314

Merged
merged 1 commit into kubernetes:master from mtaufen:dkcfg-structured-status on May 16, 2018

Conversation

@mtaufen
Contributor

mtaufen commented Apr 30, 2018

This PR updates dynamic Kubelet config to use a structured status, rather than a node condition. This makes the status machine-readable, and thus more useful for config orchestration.

Fixes: #56896

The status of dynamic Kubelet config is now reported via Node.Status.Config, rather than the KubeletConfigOk node condition.
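For context, the structured status introduced here has roughly the following shape (a simplified sketch of the new core v1 type, with doc comments abbreviated):

// NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource.
type NodeConfigStatus struct {
    // Assigned reports the checkpointed config the node will try to use: it is
    // downloaded, validated, and then used as the Active config.
    Assigned *NodeConfigSource
    // Active reports the config the node is actively using.
    Active *NodeConfigSource
    // LastKnownGood reports the config the node will fall back to if Assigned encounters an error.
    LastKnownGood *NodeConfigSource
    // Error describes any problem reconciling Spec.ConfigSource; empty when there is none.
    Error string
}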

@mtaufen mtaufen force-pushed the mtaufen:dkcfg-structured-status branch 3 times, most recently from 3ac818a to 4460b42 May 10, 2018

@dashpole

Contributor

dashpole commented May 10, 2018

/assign

@k8s-github-robot

Contributor

k8s-github-robot commented May 10, 2018

[MILESTONENOTIFIER] Milestone Pull Request: Up-to-date for process

@dashpole @dchen1107 @liggitt @mtaufen

Pull Request Labels
  • sig/cluster-lifecycle sig/node: Pull Request will be escalated to these SIGs if needed.
  • priority/important-soon: Escalate to the pull request owners and SIG owner; move out of milestone after several unsuccessful escalation attempts.
  • kind/feature: New functionality.
// taking further action. The node tries to make the Assigned config the Active config
// by downloading, checkpointing, validating, and attempting to use the referenced source.
// +optional
Assigned *NodeConfigSource


@dashpole

dashpole May 10, 2018

Contributor

From our discussion offline: I think having transition times and heartbeat times on each of these, showing when they were last updated and when they last changed, would be useful. It would allow distinguishing between roll-backs to LKG and cases where the node config hasn't yet taken effect: if Assigned has transitioned more recently than Active, the config hasn't taken effect yet; if the opposite, it was rolled back. Even though we have well-defined errors, this seems like a better way to transmit this information. It can also help with correlating config changes with other, non-fatal behavior changes (e.g. lots of evictions start happening after enabling local storage). Or, for example, if the Kubelet wasn't updating LKG after 10 minutes, or if the Kubelet was in an infinite loop while loading config and the heartbeat hadn't been updated in a couple of minutes, these would be obvious. I agree that having well-defined objects in the status is helpful for this, but I think we should model this as a "typed condition", and keep the heartbeat and transition times for each of these Sources.
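(A hypothetical shape for the "typed condition" suggested here, which is not what the PR ultimately adopted, might look like:)

// Hypothetical sketch only, not part of this PR: each reported source would
// carry its own heartbeat and transition times, similar to NodeCondition.
type NodeConfigSourceStatus struct {
    // Source is the config source this entry describes (e.g. Assigned, Active, LastKnownGood).
    Source *NodeConfigSource
    // LastHeartbeatTime is the last time the Kubelet re-confirmed this entry.
    LastHeartbeatTime metav1.Time
    // LastTransitionTime is the last time this entry changed to a different source.
    LastTransitionTime metav1.Time
}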


@mtaufen

mtaufen May 11, 2018

Author Contributor

distinguishing between roll-backs to LKG and when the node config hasn't yet taken effect

To clarify, this is the scenario where we're on LKG, then ConfigSource is updated by a user, then Assigned is updated by the Kubelet, but we haven't tried the new config yet, so the status looks inconsistent (and it's unclear if Error refers to the new Assigned or the previous Assigned)?

And the transition times would clarify this by allowing you to compare the transition time for Assigned with the transition time for Error?

It can also help with correlating config changes with other, non-fatal behavior changes

I wonder if some of these debugging scenarios aren't better covered by monitoring events or timeseries derived from metrics. We could send events at every status transition, rather than just when the Kubelet restarts to try a new config.

I want to be careful with heartbeats, as they do impact scalability (every update, including heartbeats, requires a full rewrite of the Node object in etcd). But I think transition times could provide some value.

if the kubelet was in an infinite loop during loading of config

I think you'd already get a NodeNotReady in this case.


@mtaufen

mtaufen May 12, 2018

Author Contributor

I spent the afternoon drawing up a state machine diagram, to help clarify what we should think about for status reporting: https://docs.google.com/drawings/d/1Xk_uiDFY0Y3pN6gualoy9wDPnja9BAuT-i5JYuAZ6wE/edit?usp=sharing


@mtaufen

mtaufen May 14, 2018

Author Contributor

I'm thinking that rather than having Assigned in the status be an early acknowledgement of ConfigSource, it might be clearer to make it a direct reflection of the fully explicit on-disk record of assigned config (like LKG is), and then ensure the error messages clearly differentiate between errors related to the Spec.ConfigSource vs errors related to the already-downloaded config payload.

In general, it's probably clearer if the status simply maps to the state machine.


@mtaufen

mtaufen May 14, 2018

Author Contributor

And if we do report timestamps, the simplest option might just be to report the modification times of the on-disk records for Assigned and LastKnownGood. (Active is a little trickier, since this is determined at Kubelet startup and only applies to the runtime; though the NodeReady heartbeat might be a decent proxy for whether Active is up to date or not).


@mtaufen

mtaufen May 14, 2018

Author Contributor

@dashpole and I had an offline meeting and decided to leave timestamps out for now, and potentially add them later if a controller can justify that it needs them to reason about the state of the world.

if err := utilfeature.DefaultFeatureGate.SetFromMap(kubeletConfig.FeatureGates); err != nil {
    glog.Fatal(err)
}
// If we should just use our existing, local config, the controller will return a nil config
if dynamicKubeletConfig != nil {


@dashpole

dashpole May 10, 2018

Contributor

nit: I'm generally not a fan of giving nil an implicit meaning. I would rather have an explicit return value indicating whether we should use local or remote config.


@mtaufen

mtaufen May 11, 2018

Author Contributor

I'm not sure I agree in this case.
The controller bootstrap returns the dynamic config if it exists.
If there's no dynamic config to use it returns nil (nil => Nothing is a common idiom in Go).
In the case that there's no dynamic config to use, we just move on and use the local.

I think adding a fourth return value to tag the result is unnecessary, given the ubiquity of that idiom.
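(To illustrate the idiom under discussion, the call site looks roughly like the following; the names approximate the Kubelet bootstrap code rather than reproducing it exactly:)

// Sketch of the caller's handling of the nil return: a nil dynamic config
// simply means "keep using the local config".
dynamicKubeletConfig, err := bootstrapKubeletConfigController(dynamicConfigDir)
if err != nil {
    glog.Fatal(err)
}
// If we should just use our existing, local config, the controller returns a nil config.
if dynamicKubeletConfig != nil {
    kubeletConfig = dynamicKubeletConfig
}
// Otherwise dynamicKubeletConfig is nil and kubeletConfig remains the local config.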


@dashpole

dashpole May 14, 2018

Contributor

sgtm

// taking further action. The node tries to make the Assigned config the Active config
// by downloading, checkpointing, validating, and attempting to use the referenced source.
// +optional
Assigned *NodeConfigSource


@mtaufen

mtaufen May 11, 2018

Author Contributor

TODO: Think through this case more:

  1. API server is upgraded to a version with a new source subtype; the Kubelets are not.
  2. User sets the new source type.
  3. Kubelet sets Assigned in the runtime status manager, intending to ack, but as far as it could see it had unmarshaled a config source with all-nil subfields. So the status sync will fail API server validation.
  4. Kubelet sees AllNilSubfieldsError when it tries to produce a config source a little after updating the runtime status manager, and updates the status manager with this error. But since Assigned was already set to an invalid value in the status manager, all status sync attempts will fail API server validation until this is corrected.


@mtaufen

mtaufen May 14, 2018

Author Contributor

Now that we report the checkpointed intent in Assigned, rather than using it as an early ack, this isn't a concern. AllNilSubfieldsError would be encountered prior to checkpointing the record of Assigned config.

@mtaufen mtaufen force-pushed the mtaufen:dkcfg-structured-status branch 3 times, most recently from a56f36b to 2c60104 May 14, 2018


func (s *nodeConfigStatus) ClearSyncError() {
    s.transact(func() {
        s.syncError = ""
    })
}


@dashpole

dashpole May 14, 2018

Contributor

How does this get set in the status? It looks like we ignore syncError unless len() > 0.


@mtaufen

mtaufen May 15, 2018

Author Contributor

When the status is constructed, SyncError dominates Error (e.g. Node.Status.Config.Error = nodeConfigStatus.syncError || nodeConfigStatus.status.Error).

For example:

  1. Config fails to validate, you see a ValidateError.
  2. You update config source, but you get AllNilSubfieldsError; this is a syncError (overlay), but the Kubelet is still internally aware of the ValidateError.
  3. You revert config source, Kubelet knows it doesn't need a restart, so it just clears the syncError, and you see the ongoing ValidateError again.
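(A sketch of the override behavior described above, assuming an internal status manager with a syncError field and a base status carrying the Error field:)

// Sketch only: when constructing the reported status, a non-empty syncError
// overrides the base error; clearing it lets the base error show through again.
func (s *nodeConfigStatus) currentError() string {
    if len(s.syncError) > 0 {
        return s.syncError
    }
    return s.status.Error
}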


@dashpole

dashpole May 15, 2018

Contributor

oh, duh, I get it.

// Nodes allow *all* fields, including status, to be set on create.

if !utilfeature.DefaultFeatureGate.Enabled(features.DynamicKubeletConfig) {


@dashpole

dashpole May 14, 2018

Contributor

Since I am not familiar with the pkg/registry code-base, why is this required? Same question for the addition in PrepareForUpdate.


@mtaufen

mtaufen May 15, 2018

Author Contributor

We strip alpha fields when the corresponding feature gate is turned off, so that they can't be written unless the feature gate is turned on. See the similar code in func (nodeStrategy) PrepareForCreate/PrepareForUpdate, and DropDisabledAlphaFields in pkg/api/pod/util.go.
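(The gist of the stripping behavior, as a sketch; the helper name dropDisabledAlphaFields is illustrative here, borrowed from the pod util referenced above rather than taken from this PR:)

// Illustrative sketch: with the DynamicKubeletConfig feature gate off, the
// alpha fields are cleared so they cannot be written on create or update.
func dropDisabledAlphaFields(node *api.Node) {
    if !utilfeature.DefaultFeatureGate.Enabled(features.DynamicKubeletConfig) {
        node.Spec.ConfigSource = nil
        node.Status.Config = nil
    }
}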


@dashpole

dashpole May 15, 2018

Contributor

great, thanks

@dchen1107

Member

dchen1107 commented May 15, 2018

I approve the PR but rely on dashpole@ for a thorough review.

/approve

@mtaufen mtaufen force-pushed the mtaufen:dkcfg-structured-status branch from 2c60104 to 4eb3059 May 15, 2018

// SetLastKnownGood sets the last-known-good source in the status
SetLastKnownGood(source *apiv1.NodeConfigSource)
// SetError sets the error associated with the status
SetError(err string)


@dashpole

dashpole May 15, 2018

Contributor

I would prefer having the Error and SyncError functions behave similarly, and I prefer qualifying the error, i.e.:
SetLoadError
ClearLoadError
SetSyncError
ClearSyncError
Or just Set...Error functions.

The fact that we store the load error in the status and override it with the sync error is an implementation detail. (You don't need to call it a load error; it is just what I picked since it comes from loadConfig(assigned).)


@mtaufen

mtaufen May 15, 2018

Author Contributor

I could see renaming SetSyncError to something like SetTmpError. I don't think the status manager should assume anything about the context of the base error, which is why I just stuck with SetError.


@mtaufen

mtaufen May 15, 2018

Author Contributor

Updated naming, and simplified to just SetError and SetErrorOverride.
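(After that rename, the error-related part of the status interface reads roughly:)

// SetError sets the base error associated with the status.
SetError(err string)
// SetErrorOverride sets an error that, while non-empty, is reported in place of
// the base error; passing the empty string clears the override.
SetErrorOverride(err string)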

@mtaufen mtaufen force-pushed the mtaufen:dkcfg-structured-status branch from 4eb3059 to a0d7300 May 15, 2018

Move to a structured status for dynamic Kubelet config
Updates dynamic Kubelet config to use a structured status, rather than a
node condition. This makes the status machine-readable, and thus more
useful for config orchestration.

Fixes: #56896

@mtaufen mtaufen force-pushed the mtaufen:dkcfg-structured-status branch from a0d7300 to fcc1f8e May 15, 2018

@dashpole

Contributor

dashpole commented May 15, 2018

/lgtm
good work

@k8s-ci-robot k8s-ci-robot added the lgtm label May 15, 2018

@k8s-ci-robot

Contributor

k8s-ci-robot commented May 15, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dashpole, dchen1107, mtaufen

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-github-robot

Contributor

k8s-github-robot commented May 16, 2018

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-github-robot

Contributor

k8s-github-robot commented May 16, 2018

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-github-robot k8s-github-robot merged commit 2fcac6a into kubernetes:master May 16, 2018

15 of 18 checks passed

  • Submit Queue: Required GitHub CI test is not green: pull-kubernetes-e2e-gce
  • pull-kubernetes-e2e-gce-100-performance: Job triggered.
  • pull-kubernetes-kubemark-e2e-gce-big: Job triggered.
  • cla/linuxfoundation: mtaufen authorized
  • pull-kubernetes-bazel-build: Job succeeded.
  • pull-kubernetes-bazel-test: Job succeeded.
  • pull-kubernetes-cross: Skipped
  • pull-kubernetes-e2e-gce: Job succeeded.
  • pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
  • pull-kubernetes-e2e-gke: Skipped
  • pull-kubernetes-e2e-kops-aws: Job succeeded.
  • pull-kubernetes-integration: Job succeeded.
  • pull-kubernetes-kubemark-e2e-gce: Job succeeded.
  • pull-kubernetes-local-e2e: Skipped
  • pull-kubernetes-local-e2e-containerized: Job succeeded.
  • pull-kubernetes-node-e2e: Job succeeded.
  • pull-kubernetes-typecheck: Job succeeded.
  • pull-kubernetes-verify: Job succeeded.