Modular architecture for component reconcilers and kyma CLI #13759
Comments
I like the concept of the separate Cluster/Kyma CRDs. We need to come up with a way to "tie" them together; right now we are missing a link between Runtime and Cluster creation. Would the kyma-operator handle that, KEB, or should we introduce another component? I would say that it's the component's responsibility to determine when to act. If we go with a kyma-operator-based workflow, then in the end we will end up with one component that has to know everything about the whole setup sequence of the Runtime. In my opinion, this will not really differ from the declarative-imperative mix we have right now.

Also, what is your opinion on external integrations, such as Compass registration? Should we treat them as just regular components, represented by their own CRD?
Let's think about requirements and what we expect from a high level (regardless of whether we are talking about the k8s API and CRDs, GraphQL, or a REST API). Do we want KEB to create a Kyma Runtime with one call (covering everything necessary) or not? If yes, I can imagine a "Runtime" CRD, which is the root. Then we have a runtime-operator, which creates the proper "cluster" and "kyma" resources. I can also imagine a third one: "compass".
Let's think how it looks when the root operator does not care about dependencies. It creates all resources at the same time: "cluster", "compass" (if necessary), and "kyma". Then the Kyma operator watches whether "compass" and "cluster" are ready. If yes, it starts creating "HelmComponent", "IstioComponent", "ClusterEssentials", etc. There is another way: KEB creates "cluster" and "compass", then waits; when they are ready, it creates the "kyma" resource. The question is where we expect the orchestration to be done: in KEB or in a separate component? And where should we implement the "if" statement that decides whether we register the runtime in Compass or not?
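The readiness gate described above can be sketched in Go. This is a minimal, dependency-free illustration, not the actual kyma-operator code: the `ResourceStatus` type and `readyToInstall` function are hypothetical stand-ins for watching the status of the "cluster" and "compass" CRs before creating component resources.

```go
package main

import "fmt"

// ResourceStatus is a hypothetical, simplified view of a watched resource;
// in a real operator this would come from the Cluster/Compass CR status.
type ResourceStatus struct {
	Name  string
	Ready bool
}

// readyToInstall sketches the gate applied before creating component CRs:
// the cluster must be ready, and Compass matters only when registration
// was requested at all (nil means the "if" statement decided against it).
func readyToInstall(cluster ResourceStatus, compass *ResourceStatus) bool {
	if !cluster.Ready {
		return false
	}
	if compass != nil && !compass.Ready {
		return false
	}
	return true
}

func main() {
	cluster := ResourceStatus{Name: "cluster", Ready: true}
	compass := ResourceStatus{Name: "compass", Ready: false}
	fmt.Println(readyToInstall(cluster, &compass)) // Compass registration still pending
	fmt.Println(readyToInstall(cluster, nil))      // no Compass registration requested
}
```

Making the Compass status optional keeps the registration decision out of the component reconcilers themselves, which matches the question of where that "if" statement should live.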
@piotrmiskiewicz I was thinking about having another CRD on top (Runtime), but then we have 3 levels of operators. The question is what would be in the spec of the Runtime CRD. Let's take 2 use cases:
In KEB I expect 2 separate plans for these 2 use cases, with completely different input parameters. The first plan has all the infrastructure details; the second has just a kubeconfig. If you introduce a Runtime CRD, it has to contain infrastructure details, a kubeconfig (one of them is mandatory), and a list of modules. I think it would be better to create the Kyma CR and one of the Cluster CR or a kubeconfig secret from KEB. You can create them in parallel (no need to wait).
The "tie" would be done by a reference to the kubeconfig secret (by name). KEB would create both the Cluster and Kyma resources, which would refer to the same kubeconfig name. For the BYOC model, KEB would create the Kyma resource and the kubeconfig secret directly.
The Kyma operator was not meant to manage a sequence. It is more of a meta-operator. The Kyma operator will be responsible for installing CRDs for selected components and creating a Component CR for each selected module. The logic can be generic and based on the configuration provided for each Kyma version.
Yes. Compass integration is just another module (added to the picture).
@pbochynski Regarding external systems: do these systems sit behind a VPN, or what is the reason they cannot be reached from the customer cluster? Otherwise, a proxy would be totally sufficient to reach them. I am not saying we can't centralize these components; I just want to make sure we don't artificially limit ourselves here.
@pbochynski I would like to better understand the sentence: "Component reconcilers should handle their dependencies". Consider a component operator, e.g. Ory. Should it check for prerequisites like "Is Istio installed already?" or "Is there a certificate in the cluster already?" If the answer is "yes", then I think we'll end up with a bunch of operators that have embedded knowledge about most of the runtime environment, with just minor differences between them. Of course they will install different things, but their dependencies will be similar, and the components themselves will have to know a lot about their environment.

Considering that, I vote for the model where the Kyma operator is the entity that has the knowledge about top-level dependencies (if any) and is the single source of truth for them. Component reconcilers should only focus on "technical" dependencies, like, for example, the ability to create objects (RBAC), the ability to access necessary remote services (networking), etc., without knowing which component is actually providing such services to them.
We do not have too many dependencies now, and we aim to have even fewer. Right now we have just Istio and certificates as prerequisites, and we should not treat any dependency as a hard dependency. If we don't have a certificate, it doesn't mean that the api-gateway controller cannot be installed. Most of the controllers do not even have a dependency on Istio (and should be excluded from the Istio mesh if they only communicate with the api-server).

I would not demonize the dependency check. In Ory you need Istio just to create a virtual service. So the only thing to do is to handle the error correctly: if there is no such resource as an Istio virtual service, return an error from reconciliation. Kubernetes will try again with the default backoff strategy, or you can decide when to try again (RequeueAfter). That's it. Your controller has to handle such a situation even if dependency management is implemented in the Kyma operator, because someone can delete Istio in the cluster after it was installed. We need to code controllers and reconcilers with resilience and eventual consistency in mind.
I have 3 use cases right now:
More will probably come when we get external contributions.
This issue has been automatically marked as stale due to the lack of recent activity. It will soon be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to the lack of recent activity. /lifecycle rotten
Discussion continued in kyma-project/community#666 |
The modular architecture is described here and is ready for implementation. |
Description
The Kyma architecture should support modularization. The first step in this direction was made with the initial implementation of the Kyma reconciler, but it is not sufficient.
Requirements
Reasons
Kyma provides Kubernetes building blocks. It should be easy to pick only those that are needed for the job, and it should be easy to add new blocks to extend Kyma's features. With the growing number of components, installing all of them every time is no longer feasible.
Ideas
Use the RequeueAfter option to handle missing resources you are waiting for.

API design
The API should be designed and validated against all use cases and requirements.
There are 2 top-level Custom Resources: Kyma and Cluster.
The Kyma resource does not depend on the Cluster resource; the connection is indirect. Both resources reference the kubeconfig secret that is created by the provisioner. If a cluster already exists, the kubeconfig secret can be created directly and referenced by the Kyma resource (no need to create a Cluster resource at all). The Kyma operator installs Custom Resource Definitions in the target cluster and creates component CRs referencing the same kubeconfig to start reconciliation of the selected Kyma modules. If the kubeconfig reference is empty, the Kyma operator and component reconcilers operate on the same cluster.
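The "empty reference means same cluster" rule can be sketched in a few lines. The function name and the string results below are illustrative only; a real operator would build an in-cluster rest.Config or load one from the referenced secret:

```go
package main

import "fmt"

// targetCluster sketches how an operator decides where to act based on
// the kubeconfig secret reference in its CR: an empty reference means
// "operate on the cluster I am running in", otherwise the named secret
// holds the kubeconfig of the remote target cluster.
func targetCluster(kubeconfigSecret string) string {
	if kubeconfigSecret == "" {
		return "in-cluster"
	}
	return "remote via secret " + kubeconfigSecret
}

func main() {
	fmt.Println(targetCluster(""))                       // same-cluster mode
	fmt.Println(targetCluster("runtime-abc-kubeconfig")) // managed-runtime mode
}
```

Keeping this decision in one place lets the same Kyma operator and component reconcilers serve both the managed (central control plane) and the single-cluster deployment models.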
Decisions:
Open topics:
Links