This repository has been archived by the owner on Apr 25, 2023. It is now read-only.

Component configuration for controller #114

Closed
pmorie opened this issue Jun 13, 2018 · 13 comments
Assignees
Labels
kind/feature Categorizes issue or PR as related to a new feature. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
Milestone

Comments

@pmorie
Contributor

pmorie commented Jun 13, 2018

Follow-on to #101; this is to add (where it makes sense) additional configuration parameters beyond basic feature gating. For example: the push reconciler is enabled/disabled by feature gate, but the resync duration for the push reconciler would be configured by a distinct component configuration API field mapped to a CLI flag.
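As a purely illustrative sketch (the field and flag names below are hypothetical, not the actual API), the distinction between a feature gate and a tuning parameter might look like:

```yaml
# Feature gate: only enables/disables the push reconciler.
feature-gates:
  push-reconciler: true

# Separate component configuration field for tuning, which would map
# to a CLI flag (e.g. a hypothetical --push-reconciler-resync-period).
push-reconciler:
  resync-period: 10m
```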

@marun
Contributor

marun commented Dec 3, 2018

The current set of common parameters required by controllers is captured in the ControllerConfig struct. Sourcing this configuration from a ConfigMap or other API resource would allow out-of-tree components to discover this configuration rather than having to duplicate it, and would allow kubefed2 to discover the configuration of the control plane it was targeting (e.g. so that join would know whether it was joining a cluster to a namespaced or cluster-scoped control plane).

@xunpan
Contributor

xunpan commented Mar 21, 2019

I think it can be organized as a new FederationConfig API, after reading https://docs.google.com/document/d/1arP4T9Qkp2SovlJZ_y790sBeiWXDO6SG10pZ_UUU-Lc/edit and based on shashidharatd's proposal.

kind: FederationConfig 
apiVersion: v1 
metadata:
  namespace: federation-system
  name: federation-v2
data:
  install-crds: "false"
  limited-scope: "false"
  registry-namespace: kube-multicluster-public
  controller-duration:
    available-delay: "20s"
    unavailable-delay: "60s"
    user-monitor-period: "40s"
  leader-elect:
    enable: "true"
    lease-duration: "15s"
    renew-deadline: "10s"
    retry-period: "5s"
    resource-lock: configmaps
  feature-gates:
    push-reconciler: true
    scheduler-preferences: true
    cross-cluster-service-discovery: true
    federated-ingress: true

The behavior should be:

  • This resource should be created after the federation namespace is created.
  • This resource only takes effect at controller startup. The controller will not monitor it and reconcile changes to the resource (at least in the current implementation).
  • This configuration will override command-line options. Command-line options are kept for current users.

The advantages, from my point of view, are validation and better extensibility.

@marun @shashidharatd @pmorie

@marun
Contributor

marun commented Mar 21, 2019

@xunpan I have a couple of comments:

  • I don't think install-crds should be exposed. It's more a vestige of our kubebuilder heritage than anything, and I think its use should be deprecated.
  • I'm not sure feature-gates should be a hard-coded list. That would complicate a feature gaining sufficient maturity to be enabled without a feature gate, or a feature being deprecated and removed. Is there any prior art in the document to refer to? If not, consider using a list of objects with name/enabled properties, e.g.
  feature-gates:
   - name: push-reconciler
     enabled: true
  • I think the command-line options should be deprecated (and perhaps removed before beta) in favor of accepting the yaml of the configuration type (e.g. via --config=<config.yaml>). This strategy supports command-line users without imposing the burden of coordinating maintenance across the config type and the command line options.
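As a sketch of that strategy (combining the FederationConfig shape proposed above with the list-style feature gates; the exact schema is still under discussion):

```yaml
# config.yaml, passed to the controller manager as --config=config.yaml
kind: FederationConfig
apiVersion: v1
metadata:
  namespace: federation-system
  name: federation-v2
data:
  limited-scope: "false"
  feature-gates:
    - name: push-reconciler
      enabled: true
```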

@xunpan
Contributor

xunpan commented Mar 21, 2019

@marun

  1. I agree to deprecate --install-crds. I have never seen it used in deployment ...
  2. I did not see any best practice for feature-gates settings. Your proposal makes sense.
  3. FederationConfig should be a resource that we create for controller manager configuration. So, if we use --config, does that mean the controller manager will create/update the FederationConfig based on this yaml file? Such changes do not seem good. Our use case is not the same as the kubeconfig case.

I also have question:

  1. Do we still need to keep registry-namespace independent? What about making it the same as the federation-system namespace?
    For namespaced federation, we already do this. I guess it is only used by federation v2 in our use case, even when cluster scoped. An independent registry-namespace is a little confusing.
  2. What is our deprecation policy? Delete from code? Or just leave it as is ...
    I think clean code is better if configuration is the right thing.

@marun
Contributor

marun commented Mar 21, 2019

@xunpan For your 3.

  1. FederationConfig should be a resource that we create for controller manager configuration. So, if we use --config, does that mean the controller manager will create/update the FederationConfig based on this yaml file? Such changes do not seem good. Our use case is not the same as the kubeconfig case.

My thinking is that provision of yaml via -f would be to support running the controller manager outside of a kube cluster (e.g. for testing). As you say, the use cases within the cluster are both exposing configuration to agents other than the controller manager (e.g. kubefed2 discovering configuration) as well as configuring the controller manager.

  1. Do we still need to keep registry-namespace independent? What about making it the same as the federation-system namespace?
    For namespaced federation, we already do this. I guess it is only used by federation v2 in our use case, even when cluster scoped. An independent registry-namespace is a little confusing.

In a world with a single federation control plane and no other consumers of clusters, it might make sense to have clusters only in the same namespace. But if more than one federation control plane (namespaced or otherwise) exists in a cluster, it may make sense to share a namespace containing available clusters. And if there are consumers of clusters other than federation - and it's been suggested that this is the case - there is an added need to support a configurable cluster namespace.

That said, I think it might make sense to default the registry namespace to the federation namespace rather than kube-multicluster-public, so long as any given deployment can choose to override that decision.
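Under that defaulting scheme, a deployment that wanted to share clusters across control planes would override the default explicitly (a sketch, reusing the field name from the proposal above):

```yaml
data:
  # Would default to the federation namespace (e.g. federation-system)
  # when omitted; set explicitly to share clusters with other consumers.
  registry-namespace: kube-multicluster-public
```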

cc: @pmorie

  1. What is our deprecation policy? Delete from code? Or just leave it as is ...
    I think clean code is better if configuration is the right thing.

As per recent breaking API changes, I think deprecation before we go beta is simply removal. That means we have a limited window left to do this kind of invasive work.

@xunpan
Contributor

xunpan commented Mar 26, 2019

Follow up actions:

  • make the installer script use FederationConfig for configuration
  • make the helm chart use FederationConfig for configuration
    - [ ] make kubectl disable reject in namespaced federation.
    No longer needed, because we do not make namespaced kubectl delete CRDs.

@pmorie
Contributor Author

pmorie commented Apr 2, 2019

A couple notes here while they're in my head:

I think one input to determining the shape of the eventual config API is knowing the outcome of #688.

The right granularity for configuration APIs seems like one per functional area. That would yield:

  • workloads/sync
  • dns
  • replica scheduling

Having distinct APIs for different functional areas would make it easier to evolve and version config APIs independently, which is desirable.
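For instance (the kind and group names here are purely illustrative, not an agreed API), per-area config objects might look like:

```yaml
# One config kind per functional area, versioned independently.
kind: SyncControllerConfig          # workloads/sync
apiVersion: sync.config.federation.k8s.io/v1alpha1
spec:
  resync-period: 10m
---
kind: DNSControllerConfig           # dns
apiVersion: dns.config.federation.k8s.io/v1alpha1
spec:
  zone-sync-period: 5m
```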

@marun
Contributor

marun commented Apr 3, 2019

The only options we set today are for all controllers - there are no options specific to the functional areas you mention.

@pmorie
Contributor Author

pmorie commented Apr 9, 2019

A couple notes here while they're in my head:

I think one input to determining the shape of the eventual config API is knowing the outcome of #688.

The right granularity for configuration APIs seems like one per functional area. That would yield:

  • workloads/sync
  • dns
  • replica scheduling

Having distinct APIs for different functional areas would make it easier to evolve and version config APIs independently, which is desirable.

@pmorie pmorie closed this as completed Apr 9, 2019
@pmorie pmorie reopened this Apr 9, 2019
@pmorie
Contributor Author

pmorie commented Apr 9, 2019

Whoops - did not mean to close this.

@pmorie pmorie added this to the v1beta1 milestone Apr 9, 2019
@marun
Contributor

marun commented Apr 10, 2019

Work is ongoing to implement support for FederationConfig via #729.

@marun marun added kind/feature Categorizes issue or PR as related to a new feature. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Apr 10, 2019
@xunpan
Contributor

xunpan commented Apr 25, 2019

@pmorie @marun
#729 is closed and FederationConfig is merged. Is it OK to close this ticket?

@marun
Contributor

marun commented Apr 25, 2019

Yes, I think this can be closed. Thank you!

@marun marun closed this as completed Apr 25, 2019