
Update files in the community repo to point to multicluster rather than federation.
perotinus committed Oct 11, 2017
1 parent 5bd42a9 commit f8e1cbd0920f4a181759664095d80775e4e672c0
Showing with 48 additions and 34 deletions.
  1. +1 −1 contributors/design-proposals/architecture/architecture.md
  2. +1 −1 contributors/design-proposals/dir_struct.txt
  3. 0 contributors/design-proposals/{federation → multicluster}/control-plane-resilience.md
  4. 0 contributors/design-proposals/{federation → multicluster}/federated-api-servers.md
  5. 0 contributors/design-proposals/{federation → multicluster}/federated-ingress.md
  6. +3 −3 contributors/design-proposals/{federation → multicluster}/federated-placement-policy.md
  7. 0 contributors/design-proposals/{federation → multicluster}/federated-replicasets.md
  8. 0 contributors/design-proposals/{federation → multicluster}/federated-services.md
  9. 0 contributors/design-proposals/{federation → multicluster}/federation-clusterselector.md
  10. BIN contributors/design-proposals/{federation → multicluster}/federation-high-level-arch.png
  11. 0 contributors/design-proposals/{federation → multicluster}/federation-lite.md
  12. 0 contributors/design-proposals/{federation → multicluster}/federation-phase-1.md
  13. 0 contributors/design-proposals/{federation → multicluster}/federation.md
  14. BIN contributors/design-proposals/{federation → multicluster}/ubernetes-cluster-state.png
  15. BIN contributors/design-proposals/{federation → multicluster}/ubernetes-design.png
  16. BIN contributors/design-proposals/{federation → multicluster}/ubernetes-scheduling.png
  17. +1 −1 contributors/design-proposals/scheduling/podaffinity.md
  18. +1 −1 contributors/devel/release/issues.md
  19. +1 −1 sig-list.md
  20. +22 −9 {sig-federation → sig-multicluster}/ONCALL.md
  21. 0 {sig-federation → sig-multicluster}/OWNERS
  22. +5 −5 {sig-federation → sig-multicluster}/README.md
  23. +13 −12 sigs.yaml
@@ -245,7 +245,7 @@ itself:
A single Kubernetes cluster may span multiple availability zones.
-However, for the highest availability, we recommend using [cluster federation](../federation/federation.md).
+However, for the highest availability, we recommend using [cluster federation](../multicluster/federation.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/architecture.md?pixel)]()
@@ -134,7 +134,7 @@ Uncategorized
security.md
security_context.md
service_accounts.md
-./federation
+./multicluster
control-plane-resilience.md
federated-api-servers.md
federated-ingress.md
@@ -28,7 +28,7 @@ A simple example of a placement policy is
> compliance.
The [Kubernetes Cluster
-Federation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/federation/federation.md#policy-engine-and-migrationreplication-controllers)
+Federation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/federation.md#policy-engine-and-migrationreplication-controllers)
design proposal includes a pluggable policy engine component that decides how
applications/resources are placed across federated clusters.
@@ -283,7 +283,7 @@ When the remediator component (in the sidecar) receives the notification it
sends a PATCH request to the federation-apiserver to update the affected
resource. This way, the actual rebalancing of ReplicaSets is still handled by
the [Rescheduling
-Algorithm](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/federation/federated-replicasets.md)
+Algorithm](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/federated-replicasets.md)
in the Federated ReplicaSet controller.
The remediator component must be deployed with a kubeconfig for the
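The PATCH flow described in this hunk can be sketched in a few lines of Go. Everything concrete below (the apiserver host, the resource path, and the annotation key) is a hypothetical stand-in rather than something the proposal specifies, and a real remediator would authenticate with the kubeconfig it is deployed with instead of a bare HTTP client:

```go
// Hypothetical sketch of the remediator's PATCH call. The URL, namespace,
// ReplicaSet name, and annotation key are illustrative only.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// A JSON merge patch that records a rebalancing hint as an annotation;
	// the Federated ReplicaSet controller still performs the actual rebalancing.
	patch := []byte(`{"metadata":{"annotations":{"example.io/rebalance":"true"}}}`)

	url := "https://federation-apiserver.example.com/apis/extensions/v1beta1" +
		"/namespaces/default/replicasets/my-app"

	req, err := http.NewRequest(http.MethodPatch, url, bytes.NewReader(patch))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/merge-patch+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("patch status:", resp.Status)
}
```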
@@ -368,4 +368,4 @@ engine could implement.
## Future Work
- This proposal uses ConfigMaps to store and manage policies. In the future, we
-want to introduce a first-class **Policy** API resource.
\ No newline at end of file
+want to introduce a first-class **Policy** API resource.
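As a rough illustration of the ConfigMap-based policy storage mentioned in the Future Work bullet above, the sketch below constructs a ConfigMap whose data keys each hold one policy document. The name, namespace, and Rego-style payload are assumptions for illustration; the proposal does not fix a concrete policy format at this point:

```go
// Illustrative only: the ConfigMap name, namespace, and policy payload are
// hypothetical, not taken from the proposal.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	policy := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "placement-policy",
			Namespace: "federation-system",
		},
		Data: map[string]string{
			// One entry per policy document consumed by the policy engine.
			"deny-us-east.rego": "package placement\n\ndeny { input.cluster == \"us-east-1\" }",
		},
	}
	fmt.Printf("%+v\n", policy)
}
```

A first-class **Policy** resource would replace this free-form data blob with a typed, validated API object.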
@@ -313,7 +313,7 @@ scheduler to not put more than one pod from S in the same zone, and thus by
definition it will not put more than one pod from S on the same node, assuming
each node is in one zone. This rule is more useful as PreferredDuringScheduling
anti-affinity, e.g. one might expect it to be common in
-[Cluster Federation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/federation/federation.md) clusters.)
+[Cluster Federation](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/federation.md) clusters.)
* **Don't co-locate pods of this service with pods from service "evilService"**:
`{LabelSelector: selector that matches evilService's pods, TopologyKey: "node"}`
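To make the `{LabelSelector: ..., TopologyKey: ...}` shorthand above concrete, here is a minimal sketch of the corresponding `k8s.io/api/core/v1` structs. The `app: evil-service` label is a hypothetical selector for evilService's pods, and `kubernetes.io/hostname` is the standard node-level topology key that the shorthand `"node"` stands for:

```go
// Sketch of the anti-affinity rule described above; label values are
// hypothetical.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	antiAffinity := corev1.PodAntiAffinity{
		RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
			// Match the pods of evilService.
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "evil-service"},
			},
			// Node-level spreading; a zone-level key such as
			// failure-domain.beta.kubernetes.io/zone would instead spread
			// across zones, as in the federation example above.
			TopologyKey: "kubernetes.io/hostname",
		}},
	}
	fmt.Printf("%+v\n", antiAffinity)
}
```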
@@ -32,7 +32,7 @@ The SIG owner label defines the SIG to which the bot will escalate if the issue
or updated by the deadline. If there are no updates after escalation, the
issue may automatically be removed from the milestone.
-e.g. `sig/node`, `sig/federation`, `sig/apps`, `sig/network`
+e.g. `sig/node`, `sig/multicluster`, `sig/apps`, `sig/network`
**Note:**
- For test-infrastructure issues use `sig/testing`.
@@ -35,8 +35,8 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md)
|[Cluster Ops](sig-cluster-ops/README.md)|cluster-ops|* [Rob Hirschfeld](https://github.com/zehicle), RackN<br>* [Jaice Singer DuMars](https://github.com/jdumars), Microsoft<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-ops)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-ops)|* [Thursdays at 20:00 UTC (biweekly)](https://zoom.us/j/297937771)<br>
|[Contributor Experience](sig-contributor-experience/README.md)|contributor-experience|* [Garrett Rodrigues](https://github.com/grodrigues3), Google<br>* [Elsie Phillips](https://github.com/Phillels), CoreOS<br>|* [Slack](https://kubernetes.slack.com/messages/sig-contribex)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-contribex)|* [Wednesdays at 16:30 UTC (biweekly)](https://zoom.us/j/7658488911)<br>
|[Docs](sig-docs/README.md)|docs|* [Devin Donnelly](https://github.com/devin-donnelly), Google<br>* [Jared Bhatti](https://github.com/jaredbhatti), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-docs)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)|* [Tuesdays at 17:30 UTC (weekly)](https://zoom.us/j/678394311)<br>
-|[Federation](sig-federation/README.md)|multicluster|* [Christian Bell](https://github.com/csbell), Google<br>* [Quinton Hoole](https://github.com/quinton-hoole), Huawei<br>|* [Slack](https://kubernetes.slack.com/messages/sig-federation)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-federation)|* [Tuesdays at 16:30 UTC (biweekly)](https://plus.google.com/hangouts/_/google.com/k8s-federation)<br>
|[Instrumentation](sig-instrumentation/README.md)|instrumentation|* [Piotr Szczesniak](https://github.com/piosz), Google<br>* [Fabian Reinartz](https://github.com/fabxc), CoreOS<br>|* [Slack](https://kubernetes.slack.com/messages/sig-instrumentation)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-instrumentation)|* [Thursdays at 16:30 UTC (weekly)](https://zoom.us/j/5342565819)<br>
+|[Multicluster](sig-multicluster/README.md)|multicluster|* [Christian Bell](https://github.com/csbell), Google<br>* [Quinton Hoole](https://github.com/quinton-hoole), Huawei<br>|* [Slack](https://kubernetes.slack.com/messages/sig-multicluster)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-multicluster)|* [Tuesdays at 16:30 UTC (biweekly)](https://plus.google.com/hangouts/_/google.com/k8s-mc)<br>
|[Network](sig-network/README.md)|network|* [Tim Hockin](https://github.com/thockin), Google<br>* [Dan Williams](https://github.com/dcbw), Red Hat<br>* [Casey Davenport](https://github.com/caseydavenport), Tigera<br>|* [Slack](https://kubernetes.slack.com/messages/sig-network)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-network)|* [Thursdays at 21:00 UTC (biweekly)](https://zoom.us/j/5806599998)<br>
|[Node](sig-node/README.md)|node|* [Dawn Chen](https://github.com/dchen1107), Google<br>* [Derek Carr](https://github.com/derekwaynecarr), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-node)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-node)|* [Tuesdays at 17:00 UTC (weekly)](https://plus.google.com/hangouts/_/google.com/sig-node-meetup?authuser=0)<br>
|[On Premise](sig-on-premise/README.md)|onprem|* [Marco Ceppi](https://github.com/marcoceppi), Canonical<br>* [Dalton Hubble](https://github.com/dghubble), CoreOS<br>|* [Slack](https://kubernetes.slack.com/messages/sig-onprem)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-on-prem)|* [Wednesdays at 16:00 UTC (weekly)](https://zoom.us/my/k8s.sig.onprem)<br>
@@ -1,25 +1,37 @@
# Overview
-We have an oncall rotation in the SIG. The role description is as follows:
+We have an oncall rotation for Federation in the SIG. The role description is as
+follows:
-* Ensure that the testgrid (https://k8s-testgrid.appspot.com/sig-federation) is green. This person will be the point of contact if testgrid turns red. Will identify the problem and fix it (most common scenarios: find culprit PR and revert it or free quota by deleting leaked resources).
-Will also report most common failure scenarios and suggest improvements. It's up to the SIG or individuals to prioritize and take up those tasks.
+* Ensure that the testgrid (https://k8s-testgrid.appspot.com/sig-multicluster)
+is green. This person will be the point of contact if testgrid turns red.
+Will identify the problem and fix it (most common scenarios: find culprit PR
+and revert it or free quota by deleting leaked resources). Will also report
+most common failure scenarios and suggest improvements. It's up to the SIG or
+individuals to prioritize and take up those tasks.
-Oncall playbook: https://github.com/kubernetes/community/blob/master/contributors/devel/on-call-federation-build-cop.md
+Oncall playbook:
+https://github.com/kubernetes/community/blob/master/contributors/devel/on-call-federation-build-cop.md
# Joining the rotation
-Add your name at the end of the current rotation schedule if you want to join the rotation.
-Anyone is free to join as long as they can perform the expected work described above. No special permissions are required, but familiarity with the existing codebase is recommended.
+Add your name at the end of the current rotation schedule if you want to join
+the rotation. Anyone is free to join as long as they can perform the expected
+work described above. No special permissions are required, but familiarity with
+the existing codebase is recommended.
# Swapping the rotation
-If anyone is away on their oncall week (vacation, illness, etc.), they are responsible for finding someone to swap with (by sending a PR, approved by that person). Swapping one week for another is usually relatively uncontentious.
+If anyone is away on their oncall week (vacation, illness, etc.), they are
+responsible for finding someone to swap with (by sending a PR, approved by that
+person). Swapping one week for another is usually relatively uncontentious.
# Extending the rotation schedule
-Anyone can extend the existing schedule by assigning upcoming weeks to people in the same order as the existing schedule. cc the rotation members on the PR so that they know.
-Please extend the schedule unless there are at least 2 people assigned after you.
+Anyone can extend the existing schedule by assigning upcoming weeks to people in
+the same order as the existing schedule. cc the rotation members on the PR so
+that they know. Please extend the schedule unless there are at least 2 people
+assigned after you.
# Current Oncall schedule
@@ -35,6 +47,7 @@ Please extend the schedule unless there are at least 2 people assigned after you.
```
# Past 5 rotation cycles
```
+(Adding Irfan)
7 August - 13 August: Nikhil Jindal (https://github.com/nikhiljindal)
File renamed without changes.
@@ -6,12 +6,12 @@ sigs.yaml file in the project root.
To understand how this file is generated, see generator/README.md.
-->
-# Federation SIG
+# Multicluster SIG
-Covers the Federation of Kubernetes Clusters and related topics. This includes: application resiliency against availability zone outages; hybrid clouds; spanning of multiple cloud providers; application migration from private to public clouds (and vice versa); and other similar subjects.
+Covers multi-cluster Kubernetes use cases and tooling. This includes: application resiliency against availability zone outages; hybrid clouds; spanning of multiple cloud providers; application migration from private to public clouds (and vice versa); and other similar subjects. This SIG was formerly called sig-federation and focused on the Federation project, but expanded its charter to all multi-cluster concerns in August 2017.
## Meetings
-* [Tuesdays at 16:30 UTC](https://plus.google.com/hangouts/_/google.com/k8s-federation) (biweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=16:30&tz=UTC).
+* [Tuesdays at 16:30 UTC](https://plus.google.com/hangouts/_/google.com/k8s-mc) (biweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=16:30&tz=UTC).
Meeting notes and Agenda can be found [here](https://docs.google.com/document/d/18mk62nOXE_MCSSnb4yJD_8UadtzJrYyJxFwbrgabHe8/edit).
Meeting recordings can be found [here](https://www.youtube.com/watch?v=iWKC3FsNHWg&list=PL69nYSiGNLP0HqgyqTby6HlDEz7i1mb0-).
@@ -21,8 +21,8 @@ Meeting recordings can be found [here](https://www.youtube.com/watch?v=iWKC3FsNH
* Quinton Hoole (**[@quinton-hoole](https://github.com/quinton-hoole)**), Huawei
## Contact
-* [Slack](https://kubernetes.slack.com/messages/sig-federation)
-* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-federation)
+* [Slack](https://kubernetes.slack.com/messages/sig-multicluster)
+* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-multicluster)
<!-- BEGIN CUSTOM CONTENT -->
@@ -334,14 +334,15 @@ sigs:
  contact:
    slack: sig-docs
    mailing_list: https://groups.google.com/forum/#!forum/kubernetes-sig-docs
-- name: Federation
-  dir: sig-federation
-  mission_statement: >
-    Covers the Federation of Kubernetes Clusters and related
-    topics. This includes: application resiliency against availability zone
-    outages; hybrid clouds; spanning of multiple cloud providers; application
-    migration from private to public clouds (and vice versa); and other
-    similar subjects.
+- name: Multicluster
+  dir: sig-multicluster
+  mission_statement: >
+    Covers multi-cluster Kubernetes use cases and tooling. This includes:
+    application resiliency against availability zone outages; hybrid clouds;
+    spanning of multiple cloud providers; application migration from private
+    to public clouds (and vice versa); and other similar subjects. This SIG
+    was formerly called sig-federation and focused on the Federation project,
+    but expanded its charter to all multi-cluster concerns in August 2017.
  label: multicluster
  leads:
  - name: Christian Bell
@@ -354,12 +355,12 @@ sigs:
  - day: Tuesday
    utc: "16:30"
    frequency: biweekly
-    meeting_url: https://plus.google.com/hangouts/_/google.com/k8s-federation
+    meeting_url: https://plus.google.com/hangouts/_/google.com/k8s-mc
    meeting_archive_url: https://docs.google.com/document/d/18mk62nOXE_MCSSnb4yJD_8UadtzJrYyJxFwbrgabHe8/edit
    meeting_recordings_url: https://www.youtube.com/watch?v=iWKC3FsNHWg&list=PL69nYSiGNLP0HqgyqTby6HlDEz7i1mb0-
  contact:
-    slack: sig-federation
-    mailing_list: https://groups.google.com/forum/#!forum/kubernetes-sig-federation
+    slack: sig-multicluster
+    mailing_list: https://groups.google.com/forum/#!forum/kubernetes-sig-multicluster
- name: Instrumentation
  dir: sig-instrumentation
  mission_statement: >
@@ -830,4 +831,4 @@ workinggroups:
    meeting_archive_url: https://docs.google.com/document/d/1Pxc-qwAt4FvuISZ_Ib5KdUwlynFkGueuzPx5Je_lbGM/edit
  contact:
    slack: wg-app-def
-    mailing_list: https://groups.google.com/forum/#!forum/kubernetes-wg-app-def
\ No newline at end of file
+    mailing_list: https://groups.google.com/forum/#!forum/kubernetes-wg-app-def
