
kubelet: lookup node address for external provider if none is set #75229

Merged

Conversation

@andrewsykim (Member) commented Mar 8, 2019

What type of PR is this?
/kind feature

What this PR does / why we need it:
This changes the node address lookup in the kubelet for external providers. If --cloud-provider=external and the kubelet has no node address set already, we attempt to fill in its node address with what we can find at runtime (hostname + internal IP). In most cloud providers, whatever hostname/internal IP the kubelet sets at startup will get overwritten later by the cloud provider.

This should alleviate some of the common bootstrapping problems we see with external providers because the kubelet hosting the control plane does not have node addresses set.

Which issue(s) this PR fixes:

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

Attempt to set the kubelet's hostname & internal IP if `--cloud-provider=external` and no node addresses exist
@andrewsykim (Member, Author) commented Mar 8, 2019

/sig node
/assign @mtaufen @mcrute @cheftako

@andrewsykim (Member, Author) commented Mar 8, 2019

/sig cloudprovider
/area cloudprovider

@andrewsykim (Member, Author) commented Mar 8, 2019

/priority important-longterm

@andrewsykim (Member, Author) commented Mar 8, 2019

@liggitt do you see this being a problem for kubelet serving certificates with node addresses? (#65594)

// then we return early because provider set addresses should take precedence.
// Otherwise, try to look up the node IP and let the cloud provider override it later
// This should alleviate a lot of the bootstrapping issues with out-of-tree providers
if externalCloudProvider && len(node.Status.Addresses) > 0 {

@cheftako (Member) commented Mar 11, 2019

We were just in an externalCloudProvider if statement. Why not fold this into the end of that if statement?

@andrewsykim (Author, Member) commented Mar 11, 2019

good catch

@andrewsykim (Author, Member) commented Mar 11, 2019

fixed!
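The fold @cheftako suggested amounts to keeping the precedence check inside the single `externalCloudProvider` branch rather than testing the flag twice. A minimal sketch of the resulting control flow (function and parameter names here are illustrative, not the actual kubelet diff):

```go
package main

import "fmt"

// setNodeAddresses sketches the folded logic: one branch on the external
// provider flag, with the early return for provider-set addresses nested
// inside it. "existing" stands in for node.Status.Addresses and "detected"
// for the hostname/internal-IP values found at runtime.
func setNodeAddresses(externalCloudProvider bool, existing, detected []string) []string {
	if externalCloudProvider {
		if len(existing) > 0 {
			// Provider-set addresses take precedence: return early so
			// the runtime-detected values never overwrite them.
			return existing
		}
		// No addresses yet: use the runtime-detected hostname + internal
		// IP; the external provider overrides these once it is up.
		return detected
	}
	// In-tree / no-cloud-provider path unchanged (elided in this sketch).
	return existing
}

func main() {
	// Provider already set an address: it wins.
	fmt.Println(setNodeAddresses(true, []string{"InternalIP=192.0.2.10"}, []string{"Hostname=node-a"}))
	// Nothing set yet: fall back to detected values.
	fmt.Println(setNodeAddresses(true, nil, []string{"Hostname=node-a"}))
}
```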

@andrewsykim andrewsykim force-pushed the andrewsykim:kubelet-node-address-external branch from 7473ed1 to 59eedac Mar 11, 2019

@liggitt (Member) commented Mar 12, 2019

> @liggitt do you see this being a problem for kubelet serving certificates with node addresses?

the only issue I can think of would be if the serving certificate approver process refuses to approve the self-detected addresses. in that case, the kubelet could be waiting for approval for the old SANs for a long time (manager#rotateCerts() looks like it waits up to 15 minutes before giving up and re-requesting)

@andrewsykim (Member, Author) commented Apr 19, 2019

@mtaufen can I get a review from you please?

@liggitt (Member) commented Jul 9, 2019

/uncc

this still looks good to me, will leave to @kubernetes/sig-node-pr-reviews for ack

@derekwaynecarr (Member) commented Jul 30, 2019

/assign

@derekwaynecarr (Member) commented Jul 30, 2019

@sjenning can you take a look? we run a self-hosted control plane as well so would like your review.

@derekwaynecarr (Member) commented Jul 30, 2019

is there a pointer i can reference for how external cloud provider with self hosted control planes are recommended for bootstrapping?

@andrewsykim (Member, Author) commented Jul 30, 2019

> is there a pointer i can reference for how external cloud provider with self hosted control planes are recommended for bootstrapping?

I've only tried it with kubeadm, but as far as I can tell there isn't much on this because it doesn't work. The self-hosted components rely on the kubelet/node for the API server's advertise address which relies on the node object having an InternalIP or ExternalIP set. For internal cloud provider this works fine because it queries instance metadata for node addresses, for the external case it won't get node addresses until the external provider is running which it can't without the control plane up. cc @neolit123 @fabriziopandini

@andrewsykim (Member, Author) commented Jul 30, 2019

If you know the API server advertise address ahead of time, I think you can work around it, but it kind of defeats the purpose of self-hosting if you need to manually resync that value rather than relying on what Kubernetes says is the node's address.

@derekwaynecarr (Member) commented Jul 30, 2019

/hold

we need to ensure that we have a clear bootstrapping flow identified before proceeding on this imo.

is there a kep that we can iterate or extend upon for this as part of overall external cloud provider transition?

@andrewsykim (Member, Author) commented Jul 30, 2019

> we need to ensure that we have a clear bootstrapping flow identified

fwiw I think the bootstrapping flow is well identified for the internal cloud provider case already; it just breaks for the external case because of its assumptions around node addresses. Will let kubeadm maintainers comment on this one though.

> is there a kep that we can iterate or extend upon for this as part of overall external cloud provider transition?

We have a KEP but it does not address self-hosting since it's still considered an alpha feature and wasn't part of the initial work on external providers.

@derekwaynecarr (Member) commented Jul 30, 2019

@andrewsykim given that there are conformant k8s distributions that run self-hosted, i think we need to capture a recommendation on how those distributions should proceed. i am happy to help rally resources to work through this topic if its not yet defined, but i think we need to account for it as part of the transition.

@andrewsykim (Member, Author) commented Jul 30, 2019

@derekwaynecarr that's good to know, thank you! I'm curious whether conformance actually covers the bootstrapping of self-hosted clusters. I assume it only covers post-bootstrap, which this PR shouldn't change - either way, happy to dig into this if needed.

@neolit123 (Member) commented Jul 30, 2019

hi @andrewsykim,

> This should alleviate some of the common bootstrapping problems we see with external providers because the kubelet hosting the control plane does not have node addresses set. kubeadm self-hosting is the first example that comes to mind.

if by self-hosting we mean running the kubeadm created control-plane as DaemonSets (and not the popular internet meaning which is running static-pods), then sadly this feature is pretty much unsupported at this point in kubeadm. it remained in alpha for a long time due to a set of major caveats:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/self-hosting/#caveats

reading back, the conclusion was that self-hosting and external CPs will not work in kubeadm:
ref: #60607 (comment)

> I've only tried it with kubeadm, but as far as I can tell there isn't much on this because it doesn't work. The self-hosted components rely on the kubelet/node for the API server's advertise address which relies on the node object having an InternalIP or ExternalIP set.

given my comment above, perhaps kubeadm is not the right tool to base evaluations of this change upon.

> For internal cloud provider this works fine because it queries instance metadata for node addresses, for the external case it won't get node addresses until the external provider is running which it can't without the control plane up

i might be lacking context on the external CP case.

so once the control plane is up the external CP can provide a node address at which point the control plane has to be restarted? does that mean that a new api server serving certificate has to be recreated to include a new advertise address?

> I'm curious if conformance actually covers the bootstrapping of self-hosted clusters

kubeadm does not have e2e tests for self-hosting at this point. i remember seeing self-hosting tests in the suite, but i don't think these are part of conformance.

@derekwaynecarr (Member) commented Jul 30, 2019

i am particularly concerned about using static pods with external cloud controller manager. is there a concern with that approach?

@andrewsykim (Member, Author) commented Jul 30, 2019

> if by self-hosting we mean running the kubeadm created control-plane as DaemonSets (and not the popular internet meaning which is running static-pods), then sadly this feature is pretty much unsupported at this point in kubeadm.

Apologies if this wasn't clear - yes, I was only referring to kubeadm self-hosting where the control plane is bootstrapped into a DaemonSet. Static Pods with external cloud providers work and are adopted.

@derekwaynecarr (Member) commented Jul 30, 2019

@andrewsykim thanks for clarifying. is there a document/kep that describes how a static pod approach is enabled?

@andrewsykim (Member, Author) commented Jul 30, 2019

This PR was specifically opened to address the kubeadm self-hosting case, but it also addresses general troubleshooting issues for the external CP case. More specifically, if a node fails to register with an external cloud provider, it's generally hard to troubleshoot pods running on the cluster because you can't fetch any logs without node addresses being set. With this change, nodes have more functionality prior to registration with the cloud provider.

@andrewsykim (Member, Author) commented Jul 30, 2019

> @andrewsykim thanks for clarifying. is there a document/kep that describes how a static pod approach is enabled?

It's not any different from how you would run static pods previously aside from changing some flags (mainly --cloud-provider=external) which we document here https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller. And the external add-on for the cloud provider should be documented by the cloud provider.

@derekwaynecarr (Member) commented Jul 31, 2019

/hold cancel

apologies for initial confusion. this makes sense.

/approved
/lgtm

@k8s-ci-robot k8s-ci-robot added lgtm and removed do-not-merge/hold labels Jul 31, 2019

@derekwaynecarr (Member) commented Aug 1, 2019

/approve

@k8s-ci-robot (Contributor) commented Aug 1, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andrewsykim, derekwaynecarr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@fejta-bot commented Aug 1, 2019

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@k8s-ci-robot k8s-ci-robot merged commit e8559f7 into kubernetes:master Aug 1, 2019

23 checks passed

cla/linuxfoundation: andrewsykim authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-conformance-image-test: Skipped.
pull-kubernetes-cross: Skipped.
pull-kubernetes-dependencies: Job succeeded.
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-csi-serial: Skipped.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gce-iscsi: Skipped.
pull-kubernetes-e2e-gce-iscsi-serial: Skipped.
pull-kubernetes-e2e-gce-storage-slow: Skipped.
pull-kubernetes-godeps: Skipped.
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped.
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-node-e2e-containerd: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
pull-publishing-bot-validate: Skipped.
tide: In merge pool.