
Cluster Controller Should Update Cluster APIEndpoints #96

Closed
jessicaochen opened this issue May 11, 2018 · 9 comments
Labels
priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@jessicaochen
Contributor

Currently it is the responsibility of the deployer to update cluster APIEndpoints after the control plane is provisioned. This should really live in the cluster controller so that the logic works across updates and is consistent.
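For reference, a minimal sketch of the field in question. The type and field names below are illustrative approximations, not the authoritative cluster-api definitions:

```go
package clusterapi

// Illustrative approximation of the cluster status in question: the
// deployer currently fills APIEndpoints in by hand after provisioning
// the control plane; the proposal is for the cluster controller to own
// this field instead.
type APIEndpoint struct {
	Host string // reachable address of the API server (or its load balancer)
	Port int    // typically 6443
}

type ClusterStatus struct {
	APIEndpoints []APIEndpoint
}
```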

@mkjelland mkjelland self-assigned this May 17, 2018
@dlipovetsky

I'm curious: how will the cluster controller know that APIEndpoints should be updated? Will it watch Machine objects?

@jessicaochen
Contributor Author

jessicaochen commented Jul 11, 2018

For now, the controller can probably watch for the appearance of a master machine and use the IP.

More long term, the cluster controller (or some controller at the same level) should have a concept of a control plane and create the appropriate machines to back it. Since that controller created the resources backing the control plane, it should also be able to fetch the control plane's IP.

Just my thinking on this; exactly how to do it is up to the implementer.
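For illustration, a rough sketch of that short-term approach, reusing the illustrative types from the sketch above. The Machine shape and the reconcileAPIEndpoints helper are hypothetical; a real controller would react to Machine watch events rather than receive a slice of machines:

```go
package clusterapi

import (
	"fmt"
	"strings"
)

// Machine is a stand-in for the real cluster-api Machine object; only the
// fields this sketch needs are shown, and the names are illustrative.
type Machine struct {
	Name    string
	Roles   []string // e.g. "Master", "Node"
	Address string   // externally reachable address, filled in by the provider
}

// reconcileAPIEndpoints sketches the short-term approach: once a master
// machine exists and has an address, copy it into the cluster status as
// the API endpoint.
func reconcileAPIEndpoints(status *ClusterStatus, machines []Machine) error {
	for _, m := range machines {
		for _, role := range m.Roles {
			if strings.EqualFold(role, "master") && m.Address != "" {
				status.APIEndpoints = []APIEndpoint{{Host: m.Address, Port: 6443}}
				return nil
			}
		}
	}
	return fmt.Errorf("no provisioned master machine found")
}
```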

@dlipovetsky

Is there consensus on what the cluster controller is responsible for? (I have not seen a design for it, so my own impression is that there is no consensus.)

I thought it could be responsible for the infrastructure (other than compute) required by the cluster, e.g., networks, security groups, etc., and that other controllers (e.g., MachineDeployment) would be responsible for compute infrastructure + software provisioning.

(I agree that the cluster controller would have to watch for master machines in order to update the APIEndpoints.)

@detiber
Member

detiber commented Aug 15, 2018

In the case of an HA cluster, I would expect the cluster controller to manage the API server load balancer, which should probably be the advertised endpoint. At least until there is better client tooling that allows for the use of multiple endpoints and endpoint discovery.
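A sketch of that HA behavior under the same illustrative types as the earlier snippets; advertisedEndpoint is a hypothetical helper, not actual cluster-api code:

```go
// advertisedEndpoint sketches the HA variant: when the cluster controller
// also manages an API server load balancer, that load balancer is what
// should be advertised, so clients are unaffected as individual masters
// come and go. It falls back to the first master for single-master clusters.
func advertisedEndpoint(loadBalancerDNS string, masters []Machine) APIEndpoint {
	if loadBalancerDNS != "" {
		return APIEndpoint{Host: loadBalancerDNS, Port: 6443}
	}
	if len(masters) > 0 {
		return APIEndpoint{Host: masters[0].Address, Port: 6443}
	}
	return APIEndpoint{} // nothing provisioned yet; leave the status empty
}
```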

@roberthbailey roberthbailey transferred this issue from kubernetes-sigs/cluster-api Jan 10, 2019
@roberthbailey roberthbailey added this to the v1alpha1 milestone Jan 11, 2019
@roberthbailey roberthbailey added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jan 11, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 28, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 28, 2019
@vincepri
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 18, 2019
@vincepri
Member

This has been fixed on master, v1alpha2 as part of #143

/close

@k8s-ci-robot
Contributor

@vincepri: Closing this issue.

In response to this:

This has been fixed on master, v1alpha2 as part of #143

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@vincepri vincepri modified the milestones: v1alpha1, v0.2 Sep 11, 2019