
Support cluster and machine in multiple namespaces in clusterctl create #252

Closed
jessicaochen opened this issue May 30, 2018 · 16 comments · Fixed by #481 or #509
Labels
area/clusterctl Issues or PRs related to clusterctl

Comments

@jessicaochen
Contributor

jessicaochen commented May 30, 2018

Support cluster and machine in multiple namespaces in clusterctl create as it currently assumes default namespace.

@spew
Contributor

spew commented Jun 26, 2018

Note that since this was opened we added MachineSets and MachineDeployments which should also be supported.

@ashish-amarnath
Contributor

I can work on this.
/assign

@k8s-ci-robot
Contributor

@ashish-amarnath: GitHub didn't allow me to assign the following users: ashish-amarnath.

Note that only kubernetes-sigs members and repo collaborators can be assigned.
For more information please see the contributor guide

In response to this:

I can work on this.
/assign

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ashish-amarnath
Contributor

ashish-amarnath commented Aug 2, 2018

  1. Add an option, OperatingNamespace, to create_cluster, delete_cluster, and validate_cluster, defaulting to the default namespace.
  2. Add a namespace parameter to the ClusterClient methods and use the supplied value when calling the client-go methods.

Need to clarify:

  1. Changes to the manifest templates, created as part of applyClusterAPIStack, which also point to the default namespace
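A minimal sketch of step 2 above — threading the namespace through instead of assuming "default". The types and method names here (ClusterClient, CreateClusterObject, clusterStore) are illustrative stand-ins, not the actual clusterctl code:

```go
package main

import "fmt"

// clusterStore stands in for the API server: namespace -> cluster names.
type clusterStore map[string][]string

// ClusterClient is a hypothetical, trimmed-down version of the real
// clusterctl client; each method now takes the target namespace
// explicitly rather than hard-coding "default".
type ClusterClient struct {
	store clusterStore
}

// CreateClusterObject records a cluster in the supplied namespace,
// standing in for the corresponding client-go call.
func (c *ClusterClient) CreateClusterObject(namespace, name string) error {
	c.store[namespace] = append(c.store[namespace], name)
	return nil
}

func main() {
	c := &ClusterClient{store: clusterStore{}}
	c.CreateClusterObject("team-a", "foo")
	c.CreateClusterObject("team-b", "bar")
	fmt.Println(c.store["team-a"], c.store["team-b"]) // [foo] [bar]
}
```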

@ashish-amarnath
Contributor

IMO there isn't much value in allowing the creation of cluster objects in multiple namespaces. However, there is definitely value in running controllers in different namespaces, each with a service account that has access to the cluster objects in the single namespace where they are created.
E.g. suppose there is a cluster-registry namespace in the external cluster, with controllers running in other namespaces. All controllers can then watch for cluster objects in the cluster-registry namespace and filter, say based on labels, which objects they want to reconcile.
@jessicaochen WDYT?
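The single-namespace-plus-labels idea above can be sketched roughly as follows; the namespace name, types, and helper functions (clusterObj, matches, reconcilable) are hypothetical, chosen only to illustrate the filtering step:

```go
package main

import "fmt"

// clusterRegistry is the hypothetical well-known namespace that all
// controllers would watch.
const clusterRegistry = "cluster-registry"

type clusterObj struct {
	Name   string
	Labels map[string]string
}

// matches reports whether every key/value in selector is present in labels,
// mimicking a simple equality-based label selector.
func matches(labels, selector map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

// reconcilable returns the names of the clusters a controller with the
// given selector would pick out of the shared namespace.
func reconcilable(objs []clusterObj, selector map[string]string) []string {
	var out []string
	for _, o := range objs {
		if matches(o.Labels, selector) {
			out = append(out, o.Name)
		}
	}
	return out
}

func main() {
	objs := []clusterObj{
		{Name: "alpha", Labels: map[string]string{"provider": "gcp"}},
		{Name: "beta", Labels: map[string]string{"provider": "aws"}},
	}
	fmt.Println(reconcilable(objs, map[string]string{"provider": "gcp"})) // [alpha]
}
```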

@jessicaochen
Contributor Author

My understanding about what the community decided regarding namespaces and clusters is that cluster objects will be in different namespaces and there will only be one cluster object per namespace. Any machine objects in the same namespace as a cluster object belong to that cluster (this specific point about how machines link to clusters might change).

kubernetes-retired/kube-deploy#463

I think we should stick with the community-agreed model. Feel free to counter propose in the community meeting and get community consensus if you feel strongly that we should be keeping all cluster objects in one namespace.
@roberthbailey FYI

@roberthbailey
Contributor

My understanding about what the community decided regarding namespaces and clusters is that cluster objects will be in different namespaces and there will only be one cluster object per namespace.

There were folks that wanted to put multiple clusters into a single namespace so that they could share things like credentials for a cloud provider. At that point it would be similar (conceptually) to having two GKE clusters in the same GCP project -- the namespace is like the project and you want the same access to developers to both clusters.

Any machine objects in the same namespace as a cluster object belong to that cluster

There is an open issue (and maybe PR) to make a tighter link to support the above use case.

... if you feel strongly that we should be keeping all cluster objects in one namespace.

I don't think anyone was advocating for having them all in a single namespace, but having the flexibility of having more than one per namespace.

@roberthbailey
Contributor

Also see #41.

@roberthbailey
Contributor

Issue #41 was discussed during the meeting on June 20th (notes). @mvladev had an action item to add some comments to the issue, but it looks like they weren't extracted from the conversation and put into GitHub (to be more easily found).

The summary is that we agreed to add an optional reference from Machine -> Cluster so that you could have multiple clusters in the same namespace and be able to identify which machines belong to which cluster.
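The optional Machine -> Cluster reference described above can be sketched with trimmed-down types; ObjectReference, ClusterRef, and clusterFor here are hypothetical simplifications, not the actual cluster-api schema:

```go
package main

import "fmt"

// ObjectReference is a minimal stand-in for a Kubernetes object reference.
type ObjectReference struct {
	Namespace string
	Name      string
}

// Machine gains an optional reference to its Cluster, so several
// Clusters can coexist in one namespace and still claim their machines.
type Machine struct {
	Name       string
	Namespace  string
	ClusterRef *ObjectReference // optional; nil falls back to the namespace's cluster
}

// clusterFor resolves which cluster a machine belongs to: the explicit
// reference when set, otherwise the (pre-reference) namespace default.
func clusterFor(m Machine, namespaceDefaultCluster string) string {
	if m.ClusterRef != nil {
		return m.ClusterRef.Name
	}
	return namespaceDefaultCluster
}

func main() {
	m := Machine{
		Name:       "node-1",
		Namespace:  "prod",
		ClusterRef: &ObjectReference{Namespace: "prod", Name: "cluster-b"},
	}
	fmt.Println(clusterFor(m, "cluster-a")) // cluster-b
	m.ClusterRef = nil
	fmt.Println(clusterFor(m, "cluster-a")) // cluster-a
}
```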

@ashish-amarnath
Contributor

ashish-amarnath commented Aug 10, 2018

https://github.com/kubernetes-sigs/cluster-api/blob/master/clusterctl/clusterdeployer/clusterclient.go
Currently, all the cluster-api objects are created in the default namespace, which is roughly in line with my initial idea.

After gathering thoughts from other folks, I have evaluated three approaches to solve this:

Approach 1: One namespace for all cluster objects
Cluster deployer will create all cluster-api objects in one namespace.
Pros:

  • One stop for all clusters and cluster assets, which simplifies cluster management
  • Controllers can watch for objects in just one namespace and reconcile objects created in it.

Cons:

  • Will mandate the need for tags (or something similar) to associate a cluster object with its assets
  • Selectively granting access to clusters becomes hard
  • Cluster cleanup will become more complex
  • Forces strong opinions about cluster naming: no name conflicts allowed between clusters

Approach 2: Namespace per cluster
Cluster deployer will create a namespace for every cluster.
Pros:

  • Selective access control easy to enforce
  • Cluster cleanup will be as simple as nuking the namespace that it was created in.
  • No strong opinion about cluster names: Will allow creating multiple clusters with the same name.

Cons:

  • Will allow creating multiple clusters with the same name: will require handling of name conflicts.
    E.g. with two clusters named ‘foo’, doing something to cluster foo will now have to be fully qualified
  • Cluster deployer will now have to create a namespace prior to creating other objects. More of an observation.
  • Cluster deployer API will have to change to return the newly created namespace’s name. This mapping will have to be persisted
  • Answering “List all my clusters” becomes complex. This will potentially become kubectl get clusters --all-namespaces -l <some-filter>
  • Controllers will have to watch for new namespaces being created and then reconcile objects created in those namespaces, which may be a breaking change for downstream.

Approach 3: Allow cluster deployer to accept the namespace where the cluster-api objects will need to be created
Cluster objects will be namespace scoped and the namespace will be part of the cluster spec.
Pros:

  • No strong opinions about where the objects need to be housed
  • Fields to associate cluster-api objects to a cluster can continue to remain optional
  • No change in the cluster deployer API: the namespace can be a field in the cluster definition yaml. (Implementation detail)
    If none exists, the cluster deployer can choose the default namespace. Semantics similar to other k8s objects
  • Answering “List all my clusters” can now be like kubectl get clusters -n public-clusters, i.e. a namespace for all public clusters
  • Will allow selectively granting access to more than one cluster.

Cons:

  • Cluster deployer will now have to check for naming conflicts. New error path in the cluster deployer. More of an observation than a ‘con’
  • Need to maintain mapping of namespace to clusters
  • Controllers will have to watch multiple namespaces and will also have to handle discovering new namespaces to watch

Based on the above evaluation, Approach 3 is the best option.

Feel free to correct me if I've gotten something wrong or I am missing anything.

@dlipovetsky
Contributor

In the current Cluster API architecture, the common controller code must be changed to support the different approaches listed above. If a provider's use case is not supported, the provider must choose between merging its changes to the Cluster API, or forking. In light of that, I think it's important to keep potential use cases in mind.

For example, an enterprise could run a permanent external cluster with the Cluster API. It could give internal organizations broad permissions within different namespaces in that cluster. To support this use case, the Cluster API common controllers would have to reconcile multiple Cluster objects in the same namespace--and, as a consequence, be able to associate Machine objects to some Cluster object.

@davidewatson
Contributor

@ashish-amarnath:

I wonder if we can split the backend work from the UX design. I think these are the pieces of the backend design:

FWIW, for our SSH we are assuming one cluster per namespace (no need for strong refs) and one controller per cluster (better isolation).

@ashish-amarnath
Contributor

@davidewatson In the change that I am working on at the moment, for the cluster Create I use the namespace in the cluster definition yaml, falling back to NamespaceDefault if it is empty. This way there is no change in UX. However, for the Delete I think a UX change is inevitable.
So, to be consistent, I am thinking of making the UX similar for the create scenario as well.
WDYT?
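The defaulting behavior described here can be sketched as a tiny helper; resolveNamespace is a hypothetical name, and the constant simply mirrors the value of NamespaceDefault from the Kubernetes API:

```go
package main

import "fmt"

// NamespaceDefault mirrors the Kubernetes default namespace name.
const NamespaceDefault = "default"

// resolveNamespace illustrates the defaulting described above: use the
// namespace from the cluster definition yaml, falling back to "default"
// when none is set. (Hypothetical helper, not the actual clusterctl code.)
func resolveNamespace(specNamespace string) string {
	if specNamespace == "" {
		return NamespaceDefault
	}
	return specNamespace
}

func main() {
	fmt.Println(resolveNamespace(""))        // default
	fmt.Println(resolveNamespace("my-team")) // my-team
}
```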

@davidewatson
Contributor

Fair, that's a good point.

@ashish-amarnath
Contributor

/assign

@k8s-ci-robot
Contributor

@ashish-amarnath: GitHub didn't allow me to assign the following users: ashish-amarnath.

Note that only kubernetes-sigs members and repo collaborators can be assigned.
For more information please see the contributor guide

In response to this:

/assign

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
