
Support high availability clusters #473

Closed · lavalamp opened this issue Jul 15, 2014 · 56 comments
Labels: area/HA, lifecycle/stale, milestone/removed, priority/important-soon, sig/cluster-lifecycle

Comments

@lavalamp (Member)

Master components need to be replicated for high availability.

@dchen1107 dchen1107 reopened this Jul 15, 2014
@bgrant0607 bgrant0607 changed the title Make apiserver shardable Improve apiserver availability Jul 25, 2014
@bgrant0607 (Member)

I changed the title. Sharding the kubelet communication may be useful at large scale, but it is unnecessary for improving availability. Ditto for moving kubelet communication to a separate service.

How does the apiserver recover its state after restarting? By reloading it from etcd? Presumably apiserver replicas could do the same.

The apiserver could be replicated and use etcd for master election. We could initially make just the elected master talk to kubelets. Do we propagate resource version sequence numbers all the way down to kubelet? We'd want a way to prevent wayward former masters from giving kubelets stale/wrong commands.
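As a concrete illustration of the etcd-backed election sketched here, below is a minimal example using today's go.etcd.io/etcd/client/v3/concurrency package (which postdates this discussion); the election prefix, endpoint, and candidate identity are made up for the example.

```go
// Minimal sketch of etcd-based master election, assuming the modern
// etcd v3 client; the "/kubernetes/apiserver-leader" prefix and the
// "apiserver-1" identity are illustrative, not from this issue.
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
	"go.etcd.io/etcd/client/v3/concurrency"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The session's lease keeps the election key alive; if this
	// replica dies, the key expires and another replica can win.
	sess, err := concurrency.NewSession(cli, concurrency.WithTTL(15))
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	election := concurrency.NewElection(sess, "/kubernetes/apiserver-leader")

	// Campaign blocks until this replica becomes the elected master.
	if err := election.Campaign(context.Background(), "apiserver-1"); err != nil {
		log.Fatal(err)
	}
	log.Println("elected master; only this replica talks to kubelets")
}
```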

@lavalamp (Member, Author)

Currently we have no need for master election; the apiserver is stateless. I also don't think we're close to having availability problems, so we might as well take our time and solve the sharding problem; there's no rush here.

The kubelet reads objects directly from etcd; the apiserver doesn't send the kubelet instructions. The apiserver->kubelet path is about gathering info, not sending commands.

@smarterclayton (Contributor)

I think the biggest gap today is that the operations queues aren't shared between masters behind a load balancer, yet clients assume they can fetch operations from any of them.

@lavalamp (Member, Author) commented Aug 8, 2014

Good point. How would we fix that? Have a registry/operations/ in etcd that we add an entry to every time an apiserver admits a PUT or POST? Would there be an expectation that another apiserver might pick it up if it isn't eventually marked as completed?
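For illustration only, here is a rough sketch of what such a registry could look like, written against the modern etcd v3 client; the registry/operations/ key layout follows the proposal above, but the operation fields and values are invented (this design was later abandoned when operations were removed).

```go
// Hypothetical sketch of the proposed operations registry: each
// admitted PUT/POST writes an entry under registry/operations/ that a
// peer apiserver could adopt if it is never marked completed.
package main

import (
	"context"
	"encoding/json"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

type operation struct {
	ID        string    `json:"id"`
	Verb      string    `json:"verb"` // "PUT" or "POST"
	Resource  string    `json:"resource"`
	Completed bool      `json:"completed"`
	Started   time.Time `json:"started"`
}

// recordOperation persists one in-flight operation so that other
// apiservers can see (and potentially finish) it.
func recordOperation(cli *clientv3.Client, op operation) error {
	data, err := json.Marshal(op)
	if err != nil {
		return err
	}
	_, err = cli.Put(context.Background(), "registry/operations/"+op.ID, string(data))
	return err
}

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	op := operation{ID: "op-1234", Verb: "POST", Resource: "/pods", Started: time.Now()}
	if err := recordOperation(cli, op); err != nil {
		log.Fatal(err)
	}
}
```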

@smarterclayton (Contributor)

We could encode something unique about each startup of the apiserver (a generated UUID) and put that into etcd along with the reachable host (which I don't like as much; it seems fragile). Then in the status response we could return the UUID of the process and the operation identifier, and proxy to the server corresponding to it in etcd (assuming it's reachable). That reduces the potential cross-server coupling to etcd (which is a hard dependency already), and you could say "you can't load balance across servers that can't reach each other or can't talk to the same etcd".

@bgrant0607 bgrant0607 added the priority/awaiting-more-evidence label Dec 4, 2014
@pires (Contributor) commented Jan 17, 2015

Using etcd to map existing apiservers seems very fragile indeed. You'd need some sort of watchdog mechanism to remove entries for dead/inaccessible servers. And you'd rely heavily on the availability of etcd, which you already do.

Could we think about another solution, like apiservers communicating with each other (p2p, as in a Cassandra or Hazelcast cluster) and gossiping the state of the cluster among themselves? This would work for partitioning/replicating cluster data as well.

@smarterclayton (Contributor)

We never updated this, but the agreement at the Kube face-to-face is to nuke operations (no operation in the apiserver should be long-running). The IP allocator is single-writer at this point; with some work it could be converted to a sharded key write/merge operation against etcd. The controllers and scheduler need to elect themselves by waiting on an etcd key in the short term, and in the long term compete for work and shard it across related queues. The scheduler is the first component that must be made HA if we want to run the scheduler on the cluster (we do), since it would need to schedule its own replacement. By "controller" I mean every control loop; we still have a few that live in the master code rather than in controller-manager.
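The "wait on an etcd key" election described here is essentially the pattern that later shipped as lease-based leader election in client-go. A hedged sketch of that eventual mechanism, applied to a scheduler-like component (lock name, namespace, and timings are illustrative):

```go
// Sketch of controller/scheduler leader election using client-go's
// leaderelection package (which postdates this comment). Standbys
// block until the current leader's lease lapses, then take over.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "kube-scheduler", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; start scheduling")
			},
			OnStoppedLeading: func() {
				// Exit so a standby replica can schedule our replacement.
				log.Fatal("lost leadership; exiting")
			},
		},
	})
}
```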

@erictune (Member)

ip allocator could instead be a "finalizer"?

@erictune (Member)

finalizers ➡️ #3585

@smarterclayton (Contributor)

Yeah - clients would look at Service.Status.ServiceIP, and would say "service not ready" until it had an IP.
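A small sketch of that client-side readiness check: poll the Service until an IP has been allocated. Status.ServiceIP was the proposal at the time; in the API as it eventually shipped the allocated IP lives in Spec.ClusterIP, which this sketch (with a hypothetical service name) uses instead.

```go
// Sketch of a client treating a Service as "not ready" until its IP
// is allocated; "my-service" and the polling interval are made up.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		svc, err := client.CoreV1().Services("default").Get(
			context.Background(), "my-service", metav1.GetOptions{})
		if err == nil && svc.Spec.ClusterIP != "" {
			fmt.Println("service ready at", svc.Spec.ClusterIP)
			return
		}
		fmt.Println("service not ready")
		time.Sleep(time.Second)
	}
}
```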


@bgrant0607 (Member)

Operations have been eliminated.

This is not in our 1.0 roadmap, but if someone is interested in helping, we'd happily guide them.

@bgrant0607 bgrant0607 changed the title Improve apiserver availability Support high availability clusters Feb 28, 2015
@timothysc (Member)

Are there any known limitations with baby-stepping here, for example load-balancing the apiservers?

We're finding the apiserver to be the bottleneck, and it can be CPU-bound even at steady state...

@smarterclayton (Contributor)

I have a set of changes for service IP allocation that would let us move it to a control loop. I think we could try everything else there.

Do you have profiles yet? My recent work showed conversion and serialization dominating, but that wasn't at your workload scale.


@bgrant0607 bgrant0607 removed the priority/backlog label Feb 10, 2017
@roberthbailey (Contributor)

Re-setting assignees based on who is working on HA in kubeadm for 1.8.

@k8s-github-robot

[MILESTONENOTIFIER] Milestone Removed

@lavalamp @luxas @timothysc @wojtek-t

Important: This issue was missing labels required for the v1.9 milestone for more than 3 days:

kind: Must specify exactly one of kind/bug, kind/cleanup or kind/feature.

@k8s-github-robot k8s-github-robot removed this from the v1.9 milestone Oct 9, 2017
k8s-github-robot pushed a commit that referenced this issue Nov 14, 2017
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Use results of kube-controller-manager leader election in addon manager

**What this PR does / why we need it**:
This adds a leader-election-like mechanism to the addon manager. Currently, in a multi-master setup, upgrading one master will trigger a fight between the addon managers on different masters, each forcing its own versions of the addons. This leads to pod unavailability until all masters are upgraded to the new version.

To avoid implementing leader election in bash, the results of leader election in kube-controller-manager are used. Long term, the addon manager should probably be rewritten in a real programming language (probably Go), and then real leader election should be implemented there.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
I don't think there was an issue for this specifically, but this PR is related to #473

**Special notes for your reviewer**:

**Release note**:
```release-note
Addon manager supports HA masters.
```
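For the curious, here is a hedged Go sketch of the check this PR performs in bash: read the leader annotation that kube-controller-manager maintains on its kube-system/kube-controller-manager Endpoints object and compare its holderIdentity with the local hostname (kube-controller-manager appends a unique suffix to the hostname, hence the prefix match).

```go
// Sketch of "am I on the leading master?" using kube-controller-manager's
// leader-election record; error handling is minimal for brevity.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

type leaderRecord struct {
	HolderIdentity string `json:"holderIdentity"`
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ep, err := client.CoreV1().Endpoints("kube-system").Get(
		context.Background(), "kube-controller-manager", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	var rec leaderRecord
	raw := ep.Annotations["control-plane.alpha.kubernetes.io/leader"]
	if err := json.Unmarshal([]byte(raw), &rec); err != nil {
		panic(err)
	}

	host, _ := os.Hostname()
	if strings.HasPrefix(rec.HolderIdentity, host) {
		fmt.Println("local controller-manager is the leader; apply addons")
	} else {
		fmt.Println("not the leader; skip applying addons")
	}
}
```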
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jan 7, 2018
dims pushed a commit to dims/openstack-cloud-controller-manager that referenced this issue Jan 13, 2018
@timothysc (Member)

I'm going to move to close this; there are recipes for HA today, and this issue no longer tracks any of the details. Different repos have assorted parent/child issues tracking the state, and docs have been published to the main site as well.

dims pushed a commit to dims/openstack-cloud-controller-manager that referenced this issue Mar 7, 2018
dims pushed a commit to dims/openstack-cloud-controller-manager that referenced this issue Mar 7, 2018
dims pushed a commit to dims/utils that referenced this issue Oct 19, 2018
seans3 pushed a commit to seans3/kubernetes that referenced this issue Apr 10, 2019
wking pushed a commit to wking/kubernetes that referenced this issue Jul 21, 2020
b3atlesfan pushed a commit to b3atlesfan/kubernetes that referenced this issue Feb 5, 2021