From 64b390aa8a24a2b86ffd279c245a92c4f1c05f8c Mon Sep 17 00:00:00 2001 From: Christoph Mewes Date: Wed, 10 Apr 2024 15:49:54 +0200 Subject: [PATCH] add KDP content --- content/kdp/_index.en.md | 56 ++ content/kdp/platform-operators/_index.en.md | 4 + .../monitoring/_index.en.md | 28 + content/kdp/platform-users/_index.en.md | 4 + .../consuming-services/_index.en.md | 99 ++ content/kdp/platform-users/rbac/_index.en.md | 83 ++ content/kdp/service-providers/_index.en.md | 4 + .../service-providers/crossplane/_index.en.md | 912 ++++++++++++++++++ .../publish-resources/_index.en.md | 496 ++++++++++ .../service-providers/servlet/_index.en.md | 170 ++++ content/kdp/tutorials/_index.en.md | 4 + .../tutorials/kcp-command-line/_index.en.md | 75 ++ .../tutorials/your-first-service/_index.en.md | 69 ++ 13 files changed, 2004 insertions(+) create mode 100644 content/kdp/_index.en.md create mode 100644 content/kdp/platform-operators/_index.en.md create mode 100644 content/kdp/platform-operators/monitoring/_index.en.md create mode 100644 content/kdp/platform-users/_index.en.md create mode 100644 content/kdp/platform-users/consuming-services/_index.en.md create mode 100644 content/kdp/platform-users/rbac/_index.en.md create mode 100644 content/kdp/service-providers/_index.en.md create mode 100644 content/kdp/service-providers/crossplane/_index.en.md create mode 100644 content/kdp/service-providers/publish-resources/_index.en.md create mode 100644 content/kdp/service-providers/servlet/_index.en.md create mode 100644 content/kdp/tutorials/_index.en.md create mode 100644 content/kdp/tutorials/kcp-command-line/_index.en.md create mode 100644 content/kdp/tutorials/your-first-service/_index.en.md diff --git a/content/kdp/_index.en.md b/content/kdp/_index.en.md new file mode 100644 index 000000000..2a4e2f083 --- /dev/null +++ b/content/kdp/_index.en.md @@ -0,0 +1,56 @@ ++++ +title = "Kubermatic Developer Platform" +sitemapexclude = true ++++ + +KDP (Kubermatic Developer Platform) is a new Kubermatic product in development that targets the IDP +(Internal Developer Platform) segment. This segment is part of a larger shift in the ecosystem to +"Platform Engineering", which champions the idea that DevOps in its effective form didn't quite work +out and that IT infrastructure needs new paradigms. The core idea of Platform Engineering is that +internal platforms provide higher-level services so that development teams no longer need to spend +time on operating components not core to their applications. These internal services are designed in +alignment with company policies and provide a customized framework for running applications and/or +their dependencies. + +KDP offers a central control plane for IDPs by providing an API backbone that allows to register (as +service provider) and consume (as platform user) **services**. KDP itself does **not** host the +actual workloads providing such services (e.g. if a database service is offered, the underlying +PostgreSQL pods are not hosted in KDP) and instead delegates this to so-called **service clusters**. +A component called [**servlet**]({{< relref "service-providers/servlet" >}}) is installed onto service +clusters which allows service providers (who own the service clusters) to publish APIs from their +service cluster onto KDP's central platform. + +KDP is based on [kcp](https://kcp.io), a CNCF Sandbox project to run many lightweight "logical" +clusters. Each of them acts as an independent Kubernetes API server to platform users and is called +a "Workspace". 
Workspaces are organized in a tree hierarchy, so there is a `root` workspace that has +child workspaces, and those can have child workspaces, and so on. In KDP, platform users own a certain +part of the workspace hierarchy (maybe just a single workspace, maybe a whole sub tree) and +self-manage those parts of the hierarchy that they own. This includes assigning permissions to +delegate certain tasks and subscribing to service APIs. Platform users can therefore "mix and match" +what APIs they want to have available in their workspaces to only consume the right services. + +KDP is an automation/DevOps/GitOps-friendly product and is "API-driven". Since it exposes +Kubernetes-style APIs it can be used with a lot of existing tooling (e.g. `kubectl` works to manage +resources). We have decided against an intermediate API (like we have in KKP) and the KDP Dashboard +directly interacts with the Kubernetes APIs exposed by kcp. As such everything available from the +Dashboard will be available from the API. A way for service providers to plug in custom dashboard +logic is planned, but not realized yet. + +Service APIs are not pre-defined by KDP, and as such are subject to API design in the actual +installation. Crossplane on the service cluster can be used to provide abstraction APIs that are then +reconciled to more complex resource bundles. The level of abstraction in an API is up to service +providers and will vary from setup to setup. + +## Personas + +KDP has several types of people that we identified as stakeholders in an Internal Developer Platform +based on KDP. Here is a brief overview: + +- **Platform Users** are the end users (often application developers or "DevOps engineers") in an + IDP. They consume services (e.g. they want a database or they have a container image that they want + to be started), own workspaces and self-organize within those workspaces. +- **Service Providers** offer services to developers. They register APIs that they want to provide on + the service "marketplace" and they operate service clusters and controllers/operators on those + service clusters that actually provide the services in question. +- **Platform Owners** are responsible for keeping KDP itself available and assign top-level + permissions so that developers and service providers can then utilize self-service capabilities. diff --git a/content/kdp/platform-operators/_index.en.md b/content/kdp/platform-operators/_index.en.md new file mode 100644 index 000000000..f84d493cf --- /dev/null +++ b/content/kdp/platform-operators/_index.en.md @@ -0,0 +1,4 @@ ++++ +title = "Platform Operators" +weight = 1 ++++ diff --git a/content/kdp/platform-operators/monitoring/_index.en.md b/content/kdp/platform-operators/monitoring/_index.en.md new file mode 100644 index 000000000..3c67f5f9f --- /dev/null +++ b/content/kdp/platform-operators/monitoring/_index.en.md @@ -0,0 +1,28 @@ ++++ +title = "Monitoring" +weight = 1 ++++ + +Monitoring for KDP is currently very basic. We deploy the +[kube-prometheus-stack](https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack) +Helm chart from the infra repository (see +[folder for deployment logic](https://github.com/kubermatic/infra/tree/main/clusters/platform/dev)), +but it basically only deploys prometheus-operator and Grafana. Default rules and dashboards are +omitted. + +## Accessing Grafana + +Grafana is currently not exposed. You will need to use port-forwarding to access it. 
+ +```sh +$ kubectl -n monitoring port-forward svc/prometheus-grafana 8080:80 +``` + +Now it's accessible from [localhost:8080](http://localhost:8080). A datasource called "KDP" is added +to the list of datasources on Grafana, you want to use _that_ one. + +## Dashboards + +Currently, KDP ships the following dashboards: + +- **KDP / System / API Server**: Basic API server metrics for kcp. diff --git a/content/kdp/platform-users/_index.en.md b/content/kdp/platform-users/_index.en.md new file mode 100644 index 000000000..202773895 --- /dev/null +++ b/content/kdp/platform-users/_index.en.md @@ -0,0 +1,4 @@ ++++ +title = "Platform Users" +weight = 3 ++++ diff --git a/content/kdp/platform-users/consuming-services/_index.en.md b/content/kdp/platform-users/consuming-services/_index.en.md new file mode 100644 index 000000000..0c35b84de --- /dev/null +++ b/content/kdp/platform-users/consuming-services/_index.en.md @@ -0,0 +1,99 @@ ++++ +title = "Consuming Services" +weight = 1 ++++ + +This document describes how to use (consume) Services offered in KDP. + +## Background + +A "service" in KDP defines a unique Kubernetes API Group and offers a number of resources (types) to +use. A service could offer certificate management, databases, cloud infrastructure or any other set +of Kubernetes resources. + +Services are provided by service owners, who run their own Kubernetes clusters and take care of the +maintenance and scaling tasks for the workload provisioned by all users of the service(s) they +offer. + +A KDP Service should not be confused with a Kubernetes Service. Internally, a KDP Service is +ultimately translated into a kcp `APIExport` with a number of `APIResourceSchemas` (~ CRDs). + +## Browsing Services + +Login to the KDP Dashboard and choose your organization. Then select "Services" in the menu bar to +see a list of all available Services. This page also allows to create new services, which is +further described in [Your First Service]({{< relref "../../tutorials/your-first-service" >}}) for +service owners. + +Note that every Service shows: + +* its main title (the human-readable name of a Service, like "Certificate Management") +* its internal name (ultimately the name of the Kubernetes `Service` object you would need to + manually enable the service using `kubectl`) +* a short description + +## Enabling a Service + +Before a KPD Service can be used, it must be enabled in the workspace where it should be available. + +### Dashboard + +(TODO: currently the UI has no support for this.) + +### Manually + +Alternatively, create the `APIBinding` object yourself. This section assumes that you are familiar +with [kcp on the Command Line]({{< relref "../../tutorials/kcp-command-line" >}}) and have the kcp kubectl plugin installed. + +First you need to get the kubeconfig for accessing your kcp workspaces. Once you have set your +kubeconfig up, make sure you're in the correct namespace by using +`kubectl ws `. Using `kubectl ws .` if you're unsure where you're at. + +To enable a Service, use `kcp bind apiexport` and specify the path to and name of the `APIExport`. + +```bash +# kubectl kcp bind apiexport : +kubectl kcp bind apiexport root:my-org:my.fancy.api +``` + +Without the plugin, you can create an `APIBinding` manually, simple `kubectl apply` this: + +```yaml +apiVersion: apis.kcp.io/v1alpha1 +kind: APIBinding +metadata: + name: my.fancy.api +spec: + reference: + export: + name: my.fancy.api + path: root:my-org +``` + +Shortly after, the new API will be available in the workspace. 
Check via `kubectl api-resources`. +You can now create objects for types in that API group to your liking and they will be synced and +processed behind the scenes. + +Note that a Service often has related resources, often Secrets and ConfigMaps. You must explicitly +allow the Service to access these in your workspace and this means editing/patching the `APIBinding` +object (the kcp kubectl plugin currently has no support for managing permission claims). For each of +the claimed resources, you have to accept or reject them: + +```yaml +spec: + permissionClaims: + # Nearly all Services in KDP require access to namespaces, rejecting this will + # most likely break the Service, even more than rejecting any other claim. + - all: true + resources: namespaces + state: Accepted + - all: true + resources: secrets + state: Accepted # or Rejected +``` + +Rejecting a claim will severely impact a Service, if not even break it. Consult with the Service's +documentation or the service owner if rejecting a claim is supported. + +When you _change into_ (`kubctl ws …`) a different workspace, kubectl will inform you if there are +outstanding permission claims that you need to accept or reject. diff --git a/content/kdp/platform-users/rbac/_index.en.md b/content/kdp/platform-users/rbac/_index.en.md new file mode 100644 index 000000000..e384da612 --- /dev/null +++ b/content/kdp/platform-users/rbac/_index.en.md @@ -0,0 +1,83 @@ ++++ +title = "RBAC" +weight = 2 ++++ + +# RBAC in KDP + +Authorization (authZ) in KDP closely resembles +[Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) since KDP uses kcp +as its API control plane. Besides the "standard" RBAC of Kubernetes, kcp implements concepts specific +to its multi-workspace nature. See +[upstream documentation](https://docs.kcp.io/kcp/v0.22/concepts/authorization/) for them. + +## Cross-workspace RBAC propagation + +KDP implements controllers that allow propagation of `ClusterRoles` and `ClusterRoleBindings` to +children workspaces of the workspace that they are in. Be aware that existing resources with the same +names in the children workspaces will be overwritten. + +To sync a `ClusterRole` or `ClusterRoleBinding`, annotate it with `kdp.k8c.io/sync-to-workspaces="*"`. +In the future, the feature might allow to only sync to specific child workspaces, but for now it only +supports syncing to all "downstream" workspace. + +The default roles shipped with KDP are annotated like this to be provided in all workspaces. + +## Auto-generate Service ClusterRoles + +KDP comes with the `apibinding-clusterroles-controller`, which picks up `APIBindings` with the label +`rbac.kdp.k8c.io/create-default-clusterroles=true`. It generates two `ClusterRoles` called +`services::developer` and `services::viewer`, which give write and read permissions +respectively to all resources bound by the `APIBinding`. + +Both `ClusterRoles` are aggregated to the "Developer" and "Member" roles (if present). + +If the auto-generated rules are not desired because workspace owners want to assign more granular +permissions, the recommendation is to create `APIBindings` without the mentioned labels and instead +create `ClusterRole` objects in their workspaces. The `APIBinding` status can help in identifying +which resources are available (to add them to `ClusterRoles`): + +```yaml +status: + [...] 
+ boundResources: + - group: certs-demo.k8c.io # <- API group + resource: certificates # <- resource name + schema: + UID: 758377e9-4442-4706-bdd7-365991863931 + identityHash: 7b6d5973370fb0e9104ac60b6bb5df81fc2b2320e77618a042c20281274d5a0a + name: vc517860e.certificates.certs-demo.k8c.io + storageVersions: + - v1alpha1 +``` + +Creating such `ClusterRoles` is a manual process and follows the exact same paradigms as normal +Kubernetes RBAC. Manually created roles can still use the aggregation labels (documented below) so +that their manual roles are aggregated to the "Developer" and "Member" meta-roles. + +## Well-Known Metadata + +### ClusterRoles + +#### Labels + +| Label | Value | Description | +| ---------------------------------------- | ---------- | -------------------------- | +| `rbac.kdp.k8c.io/display` | `"true"` | Make the `ClusterRole` available for assignment to users in the KDP dashboard. | +| `rbac.kdp.k8c.io/aggregate-to-member` | `"true"` | Aggregate this `ClusterRole` into the "Member" role, which is used for basic membership in a workspace (i.e. mostly read-only permissions). | +| `rbac.kdp.k8c.io/aggregate-to-developer` | `"true"` | Aggregate this `ClusterRole` into the "Developer" role, which is assigned to active contributors (creating and deleting objects). | + +#### Annotations + +| Annotation | Value | Description | +| ------------------------------ | ---------- | -------------------------- | +| `rbac.kdp.k8c.io/display-name` | String | Display name in the KDP dashboard. The dashboard falls back to the `ClusterRole` object name if this is not set. | +| `rbac.kdp.k8c.io/description` | String | Description shown as help in the KDP dashboard for this `ClusterRole`. | + +### APIBindings + +#### Labels + +| Label | Value | Description | +| --------------------------------------------- | -------- | -------------------------------------------------------------------------------------------- | +| `rbac.kdp.k8c.io/create-default-clusterroles` | `"true"` | Create default ClusterRoles (developer and viewer) for resources bound by this `APIBinding`. | diff --git a/content/kdp/service-providers/_index.en.md b/content/kdp/service-providers/_index.en.md new file mode 100644 index 000000000..103b22d1c --- /dev/null +++ b/content/kdp/service-providers/_index.en.md @@ -0,0 +1,4 @@ ++++ +title = "Service Providers" +weight = 2 ++++ diff --git a/content/kdp/service-providers/crossplane/_index.en.md b/content/kdp/service-providers/crossplane/_index.en.md new file mode 100644 index 000000000..5cbb65c89 --- /dev/null +++ b/content/kdp/service-providers/crossplane/_index.en.md @@ -0,0 +1,912 @@ ++++ +title = "Publishing Resources using Crossplane" +linkTitle = "Using Crossplane" +weight = 2 ++++ + +The guide describes the process of making a resource (usually defined by a CustomResourceDefinition) +of one Kubernetes cluster (the "service cluster" or "local cluster") available for use in the KDP +platform (the "platform cluster" or "KDP workspaces"). This involves setting up a KDP Service and +then installing the KDP Servlet and defining `PublishedResources` in the local cluster. + +All of the documentation and API types are worded and named from the perspective of a service owner, +the person(s) who own a service and want to make it available to consumers in the KDP platform. + +## High-level Overview + +A "service" in KDP comprises a set of resources within a single Kubernetes API group. 
It doesn't +need to be _all_ of the resources in that group, service owners are free and encouraged to only make +a subset of resources (i.e. a subset of CRDs) available for use in the platform. + +For each of the CRDs on the service cluster that should be published, the service owner creates a +`PublishedResource` object, which will contain both which CRD to publish, as well as numerous other +important settings that influence the behaviour around handling the CRD. + +When publishing a resource (CRD), exactly one version is published. All others are ignored from the +standpoint of the resource synchronization logic. + +All published resources together form the KDP Service. When a service is enabled in a workspace +(i.e. it is bound to it), users can manage objects for the projected resources described by the +published resources. These objects will be synced from the workspace onto the service cluster, +where they are meant to be processed in whatever way the service owners desire. Any possible +status information (in the `status` subresource) will in turn be synced back up into the workspace +where the user can inspect it. + +Additionally, a published resource can describe additional so-called "related resources". These +usually originate on the service cluster and could be for example connection detail secrets created +by Crossplane, but could also originate in the user workspace and just be additional, auxiliary +resources that need to be synced down to the service cluster. + +### `PublishedResource` + +In its simplest form (which is rarely practical) a `PublishedResource` looks like this: + +```yaml +apiVersion: services.kdp.k8c.io/v1alpha1 +kind: PublishedResource +metadata: + name: publish-certmanager-certs # name can be freely chosen +spec: + resource: + kind: Certificate + apiGroup: cert-manager.io + version: v1 +``` + +However, you will most likely apply more configuration and use features described below. + +### Filtering + +The Servlet can be instructed to only work on a subset of resources in the KDP platform. This +can be restricted by namespace and/or label selector. + +```yaml +apiVersion: services.kdp.k8c.io/v1alpha1 +kind: PublishedResource +metadata: + name: publish-certmanager-certs # name can be freely chosen +spec: + resource: ... + filter: + namespace: my-app + resource: + matchLabels: + foo: bar +``` + +### Schema + +**Warning:** The actual CRD schema is always copied verbatim. All projections +etc. have to take into account that the resource contents must be expressible without changes to the +schema. + +### Projection + +For stronger separation of concerns and to enable whitelabelling of services, the type meta for +can be projected, i.e. changed between the local service cluster and the KDP platform. You could +for example rename `Certificate` from cert-manager to `Zertifikat` inside the platform. + +Note that the API group of all published resources is always changed to the one defined in the +KDP `Service` object (meaning 1 Servlet serves all the published resources under the same API group). +That is why changing the API group cannot be configured in the projection. + +Besides renaming the Kind and Version, dependent fields like Plural, ShortNames and Categories +can be adjusted to fit the desired naming scheme in the platform. The Plural name is computed +automatically, but can be overridden. ShortNames and Categories are copied unless overwritten in the +`PublishedResource`. + +It is also possible to change the scope of resources, i.e. 
turning a namespaced resource into a +cluster-wide. This should be used carefully and might require extensive mutations. + +```yaml +apiVersion: services.kdp.k8c.io/v1alpha1 +kind: PublishedResource +metadata: + name: publish-certmanager-certs # name can be freely chosen +spec: + resource: ... + projection: + version: v1beta1 + kind: Zertifikat + plural: Zertifikate + shortNames: [zerts] + # categories: [management] + # scope: Namespaced # change only when you know what you're doing +``` + +Consumers (end users) in the platform would then ultimately see projected names only. Note that GVK +projection applies only to the synced object itself and has no effect on the contents of these +objects. To change the contents, use external solutions like Crossplane to transform objects. + + +### Naming + +Since the Servlet ingests resources from many different Kubernetes clusters (workspaces) and combines +them onto a single cluster, resources have to be renamed to prevent collisions and also follow the +conventions of whatever tooling ultimately processes the resources locally. + +The renaming is configured in `spec.naming`. In there, renaming patterns are configured, where +pre-defined placeholders can be used, for example `foo-$placeholder`. The following placeholders +are available: + +* `$remoteClusterName` – the KDP workspace's cluster name (e.g. "1084s8ceexsehjm2") +* `$remoteNamespace` – the original namespace used by the consumer inside the KDP workspace +* `$remoteNamespaceHash` – first 20 hex characters of the SHA-1 hash of `$remoteNamespace` +* `$remoteName` – the original name of the object inside the KDP workspace (rarely used to construct + local namespace names) +* `$remoteNameHash` – first 20 hex characters of the SHA-1 hash of `$remoteName` + +If nothing is configured, the default ensures that no collisions will happen: Each workspace in +the platform will create a namespace on the local cluster, with a combination of namespace and +name hashes used for the actual resource names. + +```yaml +apiVersion: services.kdp.k8c.io/v1alpha1 +kind: PublishedResource +metadata: + name: publish-certmanager-certs # name can be freely chosen +spec: + resource: ... + naming: + namespace: "$remoteClusterName" + name: "cert-$remoteNamespaceHash-$remoteNameHash" +``` + +### Related Resources + +The processing of resources on the service cluster often leads to additional resources being +created, like a `Secret` for each cert-manager `Certificate` or a connection detail secret created +by Crossplane. These need to be made available to the user in their workspaces. + +Likewise it's possible for auxiliary resources having to be created by the user, for example when +the user has to provide credentials. + +To handle these cases, a `PublishedResource` can define multiple "related resources". Each related +resource currently represents exactly one object to synchronize between user workspace and service +cluster (i.e. you cannot express "sync all Secrets"). While the main published resource sync is +always workspace->service cluster, related resources can originate on either side and so either can +work as the source of truth. + +At the moment, only `ConfigMaps` and `Secrets` are allowed related resource kinds. + +For each related resource, the servlet needs to be told the name/namespace. This is done by selecting +a field in the main resource (for a `Certificate` this would mean `spec.secretName`). 
Both name and +namespace need to be part of the main object (or be fixed values, like a hardcoded `kube-system` +namespace). + +The path expressions for name and namespace are evaluated against the main object on either side +to determine their values. So if you had a `Certificate` in your workspace with +`spec.secretName = "my-cert"` and after syncing it down, the copy on the service cluster has a +rewritten/mutated `spec.secretName = "jk23h4wz47329rz2r72r92-cert"` (e.g. to prevent naming +collisions), the expression `spec.secretName` would yield `"my-cert"` for the name in the workspace +and `"jk...."` as the name on the service cluster. Once the object exists with that name on the +originating side, the servlet will begin to sync it to the other side. + +```yaml +apiVersion: services.kdp.k8c.io/v1alpha1 +kind: PublishedResource +metadata: + name: publish-certmanager-certs +spec: + resource: + kind: Certificate + apiGroup: cert-manager.io + version: v1 + + naming: + # this is where our CA and Issuer live in this example + namespace: kube-system + # need to adjust it to prevent collions (normally clustername is the namespace) + name: "$remoteClusterName-$remoteNamespaceHash-$remoteNameHash" + + related: + - origin: service # service or platform + kind: Secret # for now, only "Secret" and "ConfigMap" are supported; + # there is no GVK projection for related resources + + # configure where in the parent object we can find + # the name/namespace of the related resource (the child) + reference: + name: + # This path is evaluated in both the local and remote objects, to figure out + # the local and remote names for the related object. This saves us from having + # to remember mutated fields before their mutation (similar to the last-known + # annotation). + path: spec.secretName + + # namespace part is optional; if not configured, + # servlet assumes the same namespace as the owning resource + # + # namespace: + # path: spec.secretName + # regex: + # pattern: '...' + # replacement: '...' + # + # to inject static values, select a meaningless string value + # and leave the pattern empty + # + # namespace: + # path: metadata.uid + # regex: + # replacement: kube-system +``` + +## Examples + +### Provide Certificates + +This combination of `Service` and `PublishedResource` make cert-manager certificates available in +kcp. The `Service` needs to be created in a workspace, most likely in an organization workspace. +The `PublishedResource` is created wherever the Servlet and cert-manager are running. + +```yaml +apiVersion: core.kdp.k8c.io/v1alpha1 +kind: Service +metadata: + name: certificate-management +spec: + apiGroup: certificates.example.corp + catalogMetadata: + title: Certificate Management + description: Acquire certificates signed by Example Corp's internal CA. 
+``` + +```yaml +apiVersion: services.kdp.k8c.io/v1alpha1 +kind: PublishedResource +metadata: + name: publish-certmanager-certs +spec: + resource: + kind: Certificate + apiGroup: cert-manager.io + version: v1 + + naming: + # this is where our CA and Issuer live in this example + namespace: kube-system + # need to adjust it to prevent collions (normally clustername is the namespace) + name: "$remoteClusterName-$remoteNamespaceHash-$remoteNameHash" + + related: + - origin: service # service or platform + kind: Secret # for now, only "Secret" and "ConfigMap" are supported; + # there is no GVK projection for related resources + + # configure where in the parent object we can find + # the name/namespace of the related resource (the child) + reference: + name: + # This path is evaluated in both the local and remote objects, to figure out + # the local and remote names for the related object. This saves us from having + # to remember mutated fields before their mutation (similar to the last-known + # annotation). + path: spec.secretName + # namespace part is optional; if not configured, + # servlet assumes the same namespace as the owning resource + # namespace: + # path: spec.secretName + # regex: + # pattern: '...' + # replacement: '...' +``` + +## Technical Details + +The following sections go into more details of the behind the scenes magic. + +### Synchronization + +Even though the whole configuration is written from the standpoint of the service owner, the actual +synchronization logic considers the platform side as the canonical source of truth. The Servlet +continuously tries to make the local objects look like the ones in the platform, while pushing +status updates back into the platform (if the given `PublishedResource` (i.e. CRD) has a `status` +subresource enabled). + +### Local <-> Remote Connection + +The Servlet tries to keep KDP-related metadata on the service cluster, away from the consumers. This +is both to prevent vandalism and to hide implementation details. + +To ensure stability against future changes, once KDP has determined how a local object should be +named, it will remember this decision in its metadata. This is so that on future reconciliations, +the (potentially costly, but probably not) renaming logic does not need to be applied again. This +allows the Servlet to change defaults and also allows the service owner to make changes to the +naming rules without breaking existing objects. + +Since we do not want to store metadata on the platform side, we instead rely on label selectors on +the local objects. Each local object has a label for the remote cluster name, namespace and object +name, and when trying to find the matching local object, the Servlet simply does a label-based +search. + +There is currently no sync-related metadata available on source objects, as this would either be +annotations (untyped strings...) or require schema changes to allow additional fields in basically +random CRDs. + +Note that fields like `generation` or `resourceVersion` are not relevant for any of the sync logic. + +### Reconcile Loop + +The sync loop can be divided into 5 parts: + +1. find the local object +2. handle deletion +3. ensure the destination object exists +4. ensure the destination object's content matches the source object +5. synchronize related resources the same way (repeat 1-4 for each related resource) + +#### Phase 1: Find the Local Object + +For this, as mentioned in the connection chapter above, the Servlet tries to follow label selectors +on the local cluster. 
This helps prevent cluttering with consumer workspaces with KDP metadata. +If no object is found to match the labels, that's fine and the loop will continue with phase 2, +in which a possible Conflict error (if labels broke) is handled gracefully. + +The remote object in the workspace becomes the `source object` and its local equivalent is called +the `destination object`. + +#### Phase 2: Handle Deletion + +A finalizer is used in the platform workspaces to prevent orphans in the service cluster side. This +is the only real evidence in the platform side that the Servlet is even doing things. When a remote +(source) object is deleted, the corresponding local object is deleted as well. Once the local object +is gone, the finalizer is removed from the source object. + +#### Phase 3: Ensure Object Existence + +We have a source object and now need to create the destination. This chart shows what's happening. + +```mermaid +graph TB + A(source object):::state --> B([cleanup if in deletion]):::step + B --> C([ensure finalizer on source object]):::step + C --> D{exists local object?} + + D -- yes --> I("continue with next phase…"):::state + D -- no --> E([apply projection]):::step + + subgraph "ensure dest object exists" + E --> G([ensure resulting namespace exists]):::step + G --> H([create local object]):::step + H --> H_err{Errors?} + H_err -- Conflict --> J([attempt to adopt existing object]):::step + end + + H_err -- success --> I + J --> I + + classDef step color:#77F + classDef state color:#F77 +``` + +After we followed through with these steps, both the source and destination objects exists and we +can continue with phase 4. + +Resource adoption happens when creation of the initial local object fails. This can happen when labels +get mangled. If such a conflict happens, the Servlet will "adopt" the existing local object by +adding / fixing the labels on it, so that for the next reconciliation it will be found and updated. + +#### Phase 4: Content Synchronization + +Content synchronization is rather simple, really. + +First the source "spec" is used to patch the local object. Note that this step is called "spec", but +should actually be called "all top-level elements besides `apiVersion`, `kind`, `status` and +`metadata`, but still including some labels and annotations"; so if you were to publish RBAC objects, +the syncer would include `roleRef` field, for example). + +To allow proper patch generation, a `last-known-state` annotation is kept on the local object. This +functions just like the one kubectl uses and is required for the Servlet to properly detect changes +made by mutation webhooks. + +If the published resource (CRD) has a `status` subresource enabled (not just a `status` field in its +scheme, it must be a real subresource), then the Servlet will copy the status from the local object +back up to the remote (source) object. + +#### Phase 5: Sync Related Resources + +The same logic for synchronizing the main published resource applies to their related resources as +well. The only difference is that the source side can be either remote (workspace) or local +(service cluster). + +This currently also means that sync-related metadata, which is always kept on the object's copy, +will end up in the user workspace when a related object originates on the service cluster (the +most common usecase). In a future version it could be nice to keep the sync state only on the +service cluster side, away from the users. 
+# Publishing resources with Crossplane + +This guide describes the process of leveraging Crossplane as a service provider to make Crossplane +claims available as `PublishedResources` for use in KDP. This involves installing Crossplane - +including all required Crossplane [providers][crossplane/docs/providers] and +[configuration packages][crossplane/docs/configurations] - and +[publishing]({{< relref "../publish-resources" >}}) (a subset of) the Crossplane claims. + +## Overview + +The KDP [Servlet]({{< relref "../servlet" >}}) is responsible for synchronizing objects from KDP to +the local service cluster where the service provider is in charge of processing these synchronized +objects to provide the actual functionality of a service. One possibility is to leverage Crossplane +to create new abstractions and custom APIs, which can be published to KDP and consumed by platform +users. + +> [!NOTE] +> While this guide is not intended to be a comprehensive Crossplane guide, it is useful to be aware +> of the most common terms: +> +> * **Providers** are pluggable building blocks to provision and manage resources via a third-party API (e.g. AWS provider) +> * **Managed resources** (MRs) are representations of actual, provider-specific resources (e.g. EC2 instance) +> * **Composite resource definitions** (XRDs) are Crossplane-specific definitions of API resources (similar to CRDs) +> * **Composite resources** (XRs) and **Claims** are Crossplane-specific custom resources created from XRD objects (similar to CRs) +> * **Compositions** are Crossplane-specific templates for transforming a XR object into one or more MR object(s) + +This guide will show you how to install Crossplane and all required providers on a service cluster +and provide a stripped-down `Certificate` resource in KDP. While we ultimately use cert-manager to +provide the actual TLS certificates, we will expose only a very limited number of fields of the +cert-manager `Certificate` to the platform users - in fact a single field to set the desired common +name. + +> [!NOTE] +> The [Upbound marketplace][upbound/marketplace/configurations] provides a list of available +> configuration packages (reusable packages of compositions and XRDs), but at the time of writing +> no suitable configuration package that relies only on the Kubernetes / Helm provider and works +> out of the box was available. + +## Install Crossplane + +First we need to install Crossplane via the [official Helm chart][crossplane/github/chart]. By +default, Crossplane does not require any special configuration so we will just use the default +values provided by the Helm chart. + +```bash +helm upgrade crossplane crossplane \ + --install \ + --create-namespace \ + --namespace=crossplane-system \ + --repo=https://charts.crossplane.io/stable \ + --version=1.15.0 \ + --wait +``` + +Once the installation is done, verify the status with the following command: + +```bash +$ kubectl get pods --namespace=crossplane-system +NAME READY STATUS RESTARTS AGE +crossplane-6494656b8b-bflcf 1/1 Running 0 45s +crossplane-rbac-manager-8458557cdd-sls58 1/1 Running 0 45s +``` + +## Install Crossplane providers + +With Crossplane up and running, we can continue and install the necessary Crossplane packages +(providers), composite resource definitions, and compositions. + +In order to manage arbitrary Kubernetes objects with Crossplane (and leverage cert-manager to +issue TLS certificates), we are going to install the `provider-kubernetes` on the service cluster. 
+Additionally (and for the sake of simplicity), we create a `DeploymentRuntimeConfig` to assign the +provider a specific service account, which can be used to assign the required permissions. + +```bash +kubectl apply --filename=- < + cluster-issuer.yaml + +```yaml +--- +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: default-bootstrap-ca + namespace: cert-manager +spec: + selfSigned: {} +--- +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: default-ca + namespace: cert-manager +spec: + isCA: true + commonName: default-ca + secretName: default-ca + privateKey: + algorithm: ECDSA + size: 256 + issuerRef: + group: cert-manager.io + kind: Issuer + name: default-bootstrap-ca +--- +apiVersion: cert-manager.io/v1 +kind: ClusterIssuer +metadata: + name: default-ca +spec: + ca: + secretName: default-ca +``` + + +
+ definition.yaml + +```yaml +apiVersion: apiextensions.crossplane.io/v1 +kind: CompositeResourceDefinition +metadata: + name: xcertificates.pki.xaas.k8c.io +spec: + group: pki.xaas.k8c.io + names: + kind: XCertificate + plural: xcertificates + claimNames: + kind: Certificate + plural: certificates + connectionSecretKeys: + - ca.crt + - tls.crt + - tls.key + versions: + - name: v1alpha1 + served: true + referenceable: true + schema: + openAPIV3Schema: + type: object + properties: + spec: + type: object + required: + - parameters + properties: + parameters: + type: object + required: + - commonName + properties: + commonName: + description: "Requested common name X509 certificate subject attribute. More info: https://datatracker.ietf.org/doc/html/rfc5280#section-4.1.2.6 NOTE: TLS clients will ignore this value when any subject alternative name is set (see https://tools.ietf.org/html/rfc6125#section-6.4.4). \n Should have a length of 64 characters or fewer to avoid generating invalid CSRs. Cannot be set if the `literalSubject` field is set." + type: string + minLength: 1 +``` +
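Once this XRD (together with the composition below) has been applied, Crossplane generates CRDs for
both the composite resource (`XCertificate`) and the claim (`Certificate`). If you want to
double-check that both APIs exist on the service cluster, listing the CRDs of the new API group
should show something like this:

```bash
$ kubectl get crds --output=name | grep pki.xaas.k8c.io
customresourcedefinition.apiextensions.k8s.io/certificates.pki.xaas.k8c.io
customresourcedefinition.apiextensions.k8s.io/xcertificates.pki.xaas.k8c.io
```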
+ +
+ composition.yaml + +```yaml +apiVersion: apiextensions.crossplane.io/v1 +kind: Composition +metadata: + name: v1alpha1.xcertificates.cert-manager.pki.xaas.k8c.io + labels: + xaas.k8c.io/provider-name: cert-manager +spec: + compositeTypeRef: + apiVersion: pki.xaas.k8c.io/v1alpha1 + kind: XCertificate + resources: + - name: certificate + base: + apiVersion: kubernetes.crossplane.io/v1alpha2 + kind: Object + spec: + forProvider: + manifest: + apiVersion: cert-manager.io/v1 + kind: Certificate + spec: + issuerRef: + group: cert-manager.io + kind: ClusterIssuer + name: default-ca + readiness: + policy: DeriveFromObject + providerConfigRef: + name: in-cluster + connectionDetails: + - apiVersion: v1 + kind: Secret + namespace: __PATCHED__ + name: __PATCHED__ + fieldPath: data['ca.crt'] + toConnectionSecretKey: ca.crt + - apiVersion: v1 + kind: Secret + namespace: __PATCHED__ + name: __PATCHED__ + fieldPath: data['tls.crt'] + toConnectionSecretKey: tls.crt + - apiVersion: v1 + kind: Secret + namespace: __PATCHED__ + name: __PATCHED__ + fieldPath: data['tls.key'] + toConnectionSecretKey: tls.key + writeConnectionSecretToRef: + namespace: crossplane-system + patches: + # spec.forProvider.manifest.metadata + - type: FromCompositeFieldPath + fromFieldPath: spec.claimRef.namespace + toFieldPath: spec.forProvider.manifest.metadata.namespace + policy: + fromFieldPath: Required + # spec.forProvider.manifest.spec + - type: FromCompositeFieldPath + fromFieldPath: spec.parameters.commonName + toFieldPath: spec.forProvider.manifest.spec.commonName + policy: + fromFieldPath: Required + - type: FromCompositeFieldPath + fromFieldPath: metadata.name + toFieldPath: spec.forProvider.manifest.spec.secretName + policy: + fromFieldPath: Required + # spec.connectionDetails + - type: FromCompositeFieldPath + fromFieldPath: spec.claimRef.namespace + toFieldPath: spec.connectionDetails[*].namespace + policy: + fromFieldPath: Required + - type: FromCompositeFieldPath + fromFieldPath: metadata.name + toFieldPath: spec.connectionDetails[*].name + policy: + fromFieldPath: Required + # spec.writeConnectionSecretToRef + - type: FromCompositeFieldPath + fromFieldPath: metadata.uid + toFieldPath: spec.writeConnectionSecretToRef.name + policy: + fromFieldPath: Required + transforms: + - type: string + string: + type: Format + fmt: "%s-certificate" + connectionDetails: + - name: ca.crt + type: FromConnectionSecretKey + fromConnectionSecretKey: ca.crt + - name: tls.crt + type: FromConnectionSecretKey + fromConnectionSecretKey: tls.crt + - name: tls.key + type: FromConnectionSecretKey + fromConnectionSecretKey: tls.key + writeConnectionSecretsToNamespace: crossplane-system +``` +
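With the issuer bootstrap, the XRD and the composition in place, apply all three manifests to the
service cluster. Assuming you saved them under the file names shown above, this could look like:

```bash
kubectl apply --filename=cluster-issuer.yaml \
  --filename=definition.yaml \
  --filename=composition.yaml
```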
Afterwards verify the status of the composite resource definition and the composition with the
following command:

```bash
$ kubectl get compositeresourcedefinitions,compositions
NAME                            ESTABLISHED   OFFERED   AGE
xcertificates.pki.xaas.k8c.io   True          True      10s

NAME                                                  XR-KIND        XR-APIVERSION              AGE
v1alpha1.xcertificates.cert-manager.pki.xaas.k8c.io   XCertificate   pki.xaas.k8c.io/v1alpha1   17s
```

Additionally, before we continue and publish our `Certificate` resource to KDP, you can verify that
everything is working as expected on the service cluster by applying the following example
certificate manifest:

```bash
kubectl apply --filename=- <<EOF
apiVersion: pki.xaas.k8c.io/v1alpha1
kind: Certificate
metadata:
  name: www-example-com
spec:
  parameters:
    # the only required parameter defined by the XRD above
    commonName: www.example.com
  writeConnectionSecretToRef:
    name: www-example-com
EOF
```

Crossplane will pick up the claim, create the corresponding composite resource (XR) and render the
composition, which in turn makes `provider-kubernetes` create the actual cert-manager `Certificate`:

```mermaid
graph RL
  subgraph "Crossplane"
    A("Certificate <br> (Claim)") --> B("XCertificate <br> (XR)")
    C("v1alpha1.xcertificate <br> (Composition)") --> B --> C
  end

  subgraph "provider-kubernetes"
    D("Object <br> (MR)")
  end

  subgraph "cert-manager"
    E(Certificate)
  end

  C --> D --> E
```

Now `provider-kubernetes` will wait for the secret containing the actual signed TLS certificate
issued by cert-manager and copy it into an intermediate secret (the connection secret) in the
`crossplane-system` namespace. From there it is picked up by Crossplane, which copies the
information into the secret (the combined secret) defined in the `Certificate` claim by
`spec.writeConnectionSecretToRef.name` (phew, you made it).

```mermaid
graph RL
  subgraph "Crossplane"
    A("Secret <br> (Combined secret)")
  end

  subgraph "provider-kubernetes"
    B("Secret <br> (Connection secret)")
  end

  subgraph "cert-manager"
    C("Secret
(TLS certificate)") + end + + A --> B --> C +``` + +If everything worked out, you should get all relevant objects with the following command: + +```bash +$ kubectl get claim,composite,managed,certificate +NAME SYNCED READY CONNECTION-SECRET AGE +certificate.pki.xaas.k8c.io/www-example-com True True www-example-com 21m + +NAME SYNCED READY COMPOSITION AGE +xcertificate.pki.xaas.k8c.io/www-example-com-z59kn True True v1alpha1.xcertificates.cert-manager.pki.xaas.k8c.io 21m + +NAME KIND PROVIDERCONFIG SYNCED READY AGE +object.kubernetes.crossplane.io/www-example-com-z59kn-8wcmd Certificate in-cluster True True 21m + +NAME READY SECRET AGE +certificate.cert-manager.io/www-example-com-z59kn-8wcmd True www-example-com-z59kn 21m +``` + +## Publish Crossplane claims + +Now onto the final step: making our custom `Certificate` available in KDP. This can be achieved by +simply applying the following manifest to the service cluster. + +```bash +kubectl apply --filename=- <<'EOF' +apiVersion: services.kdp.k8c.io/v1alpha1 +kind: PublishedResource +metadata: + name: v1alpha1.certificate.pki.xaas.k8c.io +spec: + naming: + name: $remoteName + namespace: certs-$remoteClusterName-$remoteNamespaceHash + related: + - kind: Secret + origin: service + reference: + name: + path: spec.writeConnectionSecretToRef.name + resource: + apiGroup: pki.xaas.k8c.io + kind: Certificate + version: v1alpha1 +EOF +``` + +And done! The Servlet will pick up the `PublishedResource` object, set up the corresponding kcp +`APIExport` and `APIResourceSchema` and begin syncing objects from KDP to your service cluster. + +For more information, see the guide on [publishing resources]({{< relref "../publish-resources" >}}). + +[cert-manager/github/chart]: https://github.com/cert-manager/cert-manager/tree/v1.14.2/deploy/charts/cert-manager +[crossplane/docs/providers]: https://docs.crossplane.io/latest/concepts/providers/ +[crossplane/docs/configurations]: https://docs.crossplane.io/latest/concepts/packages/ +[crossplane/github/chart]: https://github.com/crossplane/crossplane/tree/v1.15.0/cluster/charts/crossplane +[upbound/marketplace/configurations]: https://marketplace.upbound.io/configurations diff --git a/content/kdp/service-providers/publish-resources/_index.en.md b/content/kdp/service-providers/publish-resources/_index.en.md new file mode 100644 index 000000000..91737c054 --- /dev/null +++ b/content/kdp/service-providers/publish-resources/_index.en.md @@ -0,0 +1,496 @@ ++++ +title = "Publishing Resources" +weight = 1 ++++ + +The guide describes the process of making a resource (usually defined by a CustomResourceDefinition) +of one Kubernetes cluster (the "service cluster" or "local cluster") available for use in the KDP +platform (the "platform cluster" or "KDP workspaces"). This involves setting up a KDP Service and +then installing the KDP Servlet and defining `PublishedResources` in the local cluster. + +All of the documentation and API types are worded and named from the perspective of a service owner, +the person(s) who own a service and want to make it available to consumers in the KDP platform. + +## High-level Overview + +A "service" in KDP comprises a set of resources within a single Kubernetes API group. It doesn't +need to be _all_ of the resources in that group, service owners are free and encouraged to only make +a subset of resources (i.e. a subset of CRDs) available for use in the platform. 
+ +For each of the CRDs on the service cluster that should be published, the service owner creates a +`PublishedResource` object, which will contain both which CRD to publish, as well as numerous other +important settings that influence the behaviour around handling the CRD. + +When publishing a resource (CRD), exactly one version is published. All others are ignored from the +standpoint of the resource synchronization logic. + +All published resources together form the KDP Service. When a service is enabled in a workspace +(i.e. it is bound to it), users can manage objects for the projected resources described by the +published resources. These objects will be synced from the workspace onto the service cluster, +where they are meant to be processed in whatever way the service owners desire. Any possible +status information (in the `status` subresource) will in turn be synced back up into the workspace +where the user can inspect it. + +Additionally, a published resource can describe additional so-called "related resources". These +usually originate on the service cluster and could be for example connection detail secrets created +by Crossplane, but could also originate in the user workspace and just be additional, auxiliary +resources that need to be synced down to the service cluster. + +### `PublishedResource` + +In its simplest form (which is rarely practical) a `PublishedResource` looks like this: + +```yaml +apiVersion: services.kdp.k8c.io/v1alpha1 +kind: PublishedResource +metadata: + name: publish-certmanager-certs # name can be freely chosen +spec: + resource: + kind: Certificate + apiGroup: cert-manager.io + version: v1 +``` + +However, you will most likely apply more configuration and use features described below. + +### Filtering + +The Servlet can be instructed to only work on a subset of resources in the KDP platform. This +can be restricted by namespace and/or label selector. + +```yaml +apiVersion: services.kdp.k8c.io/v1alpha1 +kind: PublishedResource +metadata: + name: publish-certmanager-certs # name can be freely chosen +spec: + resource: ... + filter: + namespace: my-app + resource: + matchLabels: + foo: bar +``` + +### Schema + +**Warning:** The actual CRD schema is always copied verbatim. All projections +etc. have to take into account that the resource contents must be expressible without changes to the +schema. + +### Projection + +For stronger separation of concerns and to enable whitelabelling of services, the type meta for +can be projected, i.e. changed between the local service cluster and the KDP platform. You could +for example rename `Certificate` from cert-manager to `Zertifikat` inside the platform. + +Note that the API group of all published resources is always changed to the one defined in the +KDP `Service` object (meaning 1 Servlet serves all the [selected] published resources under the +same API group). That is why changing the API group cannot be configured in the projection. + +Besides renaming the Kind and Version, dependent fields like Plural, ShortNames and Categories +can be adjusted to fit the desired naming scheme in the platform. The Plural name is computed +automatically, but can be overridden. ShortNames and Categories are copied unless overwritten in the +`PublishedResource`. + +It is also possible to change the scope of resources, i.e. turning a namespaced resource into a +cluster-wide. This should be used carefully and might require extensive mutations. 
+ +```yaml +apiVersion: services.kdp.k8c.io/v1alpha1 +kind: PublishedResource +metadata: + name: publish-certmanager-certs # name can be freely chosen +spec: + resource: ... + projection: + version: v1beta1 + kind: Zertifikat + plural: Zertifikate + shortNames: [zerts] + # categories: [management] + # scope: Namespaced # change only when you know what you're doing +``` + +Consumers (end users) in the platform would then ultimately see projected names only. Note that GVK +projection applies only to the synced object itself and has no effect on the contents of these +objects. To change the contents, use external solutions like Crossplane to transform objects. + + +### Naming + +Since the Servlet ingests resources from many different Kubernetes clusters (workspaces) and combines +them onto a single cluster, resources have to be renamed to prevent collisions and also follow the +conventions of whatever tooling ultimately processes the resources locally. + +The renaming is configured in `spec.naming`. In there, renaming patterns are configured, where +pre-defined placeholders can be used, for example `foo-$placeholder`. The following placeholders +are available: + +* `$remoteClusterName` – the KDP workspace's cluster name (e.g. "1084s8ceexsehjm2") +* `$remoteNamespace` – the original namespace used by the consumer inside the KDP workspace +* `$remoteNamespaceHash` – first 20 hex characters of the SHA-1 hash of `$remoteNamespace` +* `$remoteName` – the original name of the object inside the KDP workspace (rarely used to construct + local namespace names) +* `$remoteNameHash` – first 20 hex characters of the SHA-1 hash of `$remoteName` + +If nothing is configured, the default ensures that no collisions will happen: Each workspace in +the platform will create a namespace on the local cluster, with a combination of namespace and +name hashes used for the actual resource names. + +```yaml +apiVersion: services.kdp.k8c.io/v1alpha1 +kind: PublishedResource +metadata: + name: publish-certmanager-certs # name can be freely chosen +spec: + resource: ... + naming: + namespace: "$remoteClusterName" + name: "cert-$remoteNamespaceHash-$remoteNameHash" +``` + + + +### Related Resources + +The processing of resources on the service cluster often leads to additional resources being +created, like a `Secret` for each cert-manager `Certificate` or a connection detail secret created +by Crossplane. These need to be made available to the user in their workspaces. + +Likewise it's possible for auxiliary resources having to be created by the user, for example when +the user has to provide credentials. + +To handle these cases, a `PublishedResource` can define multiple "related resources". Each related +resource currently represents exactly one object to synchronize between user workspace and service +cluster (i.e. you cannot express "sync all Secrets"). While the main published resource sync is +always workspace->service cluster, related resources can originate on either side and so either can +work as the source of truth. + +At the moment, only `ConfigMaps` and `Secrets` are allowed related resource kinds. + +For each related resource, the servlet needs to be told the name/namespace. This is done by selecting +a field in the main resource (for a `Certificate` this would mean `spec.secretName`). Both name and +namespace need to be part of the main object (or be fixed values, like a hardcoded `kube-system` +namespace). 
+ +The path expressions for name and namespace are evaluated against the main object on either side +to determine their values. So if you had a `Certificate` in your workspace with +`spec.secretName = "my-cert"` and after syncing it down, the copy on the service cluster has a +rewritten/mutated `spec.secretName = "jk23h4wz47329rz2r72r92-cert"` (e.g. to prevent naming +collisions), the expression `spec.secretName` would yield `"my-cert"` for the name in the workspace +and `"jk...."` as the name on the service cluster. Once the object exists with that name on the +originating side, the servlet will begin to sync it to the other side. + +```yaml +apiVersion: services.kdp.k8c.io/v1alpha1 +kind: PublishedResource +metadata: + name: publish-certmanager-certs +spec: + resource: + kind: Certificate + apiGroup: cert-manager.io + version: v1 + + naming: + # this is where our CA and Issuer live in this example + namespace: kube-system + # need to adjust it to prevent collions (normally clustername is the namespace) + name: "$remoteClusterName-$remoteNamespaceHash-$remoteNameHash" + + related: + - origin: service # service or platform + kind: Secret # for now, only "Secret" and "ConfigMap" are supported; + # there is no GVK projection for related resources + + # configure where in the parent object we can find + # the name/namespace of the related resource (the child) + reference: + name: + # This path is evaluated in both the local and remote objects, to figure out + # the local and remote names for the related object. This saves us from having + # to remember mutated fields before their mutation (similar to the last-known + # annotation). + path: spec.secretName + + # namespace part is optional; if not configured, + # servlet assumes the same namespace as the owning resource + # + # namespace: + # path: spec.secretName + # regex: + # pattern: '...' + # replacement: '...' + # + # to inject static values, select a meaningless string value + # and leave the pattern empty + # + # namespace: + # path: metadata.uid + # regex: + # replacement: kube-system +``` + +## Examples + +### Provide Certificates + +This combination of `Service` and `PublishedResource` make cert-manager certificates available in +kcp. The `Service` needs to be created in a workspace, most likely in an organization workspace. +The `PublishedResource` is created wherever the Servlet and cert-manager are running. + +```yaml +apiVersion: core.kdp.k8c.io/v1alpha1 +kind: Service +metadata: + name: certificate-management +spec: + apiGroup: certificates.example.corp + catalogMetadata: + title: Certificate Management + description: Acquire certificates signed by Example Corp's internal CA. 
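  # Note: `apiGroup` above is the group under which platform users will see the
  # published resources in their workspaces; the Servlet serves all
  # PublishedResources of this Service under this one group (see "Projection").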
```

```yaml
apiVersion: services.kdp.k8c.io/v1alpha1
kind: PublishedResource
metadata:
  name: publish-certmanager-certs
spec:
  resource:
    kind: Certificate
    apiGroup: cert-manager.io
    version: v1

  naming:
    # this is where our CA and Issuer live in this example
    namespace: kube-system
    # need to adjust it to prevent collisions (normally the cluster name is the namespace)
    name: "$remoteClusterName-$remoteNamespaceHash-$remoteNameHash"

  related:
    - origin: service # service or platform
      kind: Secret # for now, only "Secret" and "ConfigMap" are supported;
                   # there is no GVK projection for related resources

      # configure where in the parent object we can find
      # the name/namespace of the related resource (the child)
      reference:
        name:
          # This path is evaluated in both the local and remote objects, to figure out
          # the local and remote names for the related object. This saves us from having
          # to remember mutated fields before their mutation (similar to the last-known
          # annotation).
          path: spec.secretName
        # namespace part is optional; if not configured,
        # servlet assumes the same namespace as the owning resource
        # namespace:
        #   path: spec.secretName
        #   regex:
        #     pattern: '...'
        #     replacement: '...'
```

## Technical Details

The following sections go into more detail on the behind-the-scenes magic.

### Synchronization

Even though the whole configuration is written from the standpoint of the service owner, the actual
synchronization logic considers the platform side as the canonical source of truth. The Servlet
continuously tries to make the local objects look like the ones in the platform, while pushing
status updates back into the platform (if the given `PublishedResource` (i.e. CRD) has a `status`
subresource enabled).

### Local <-> Remote Connection

The Servlet tries to keep KDP-related metadata on the service cluster, away from the consumers. This
is both to prevent vandalism and to hide implementation details.

To ensure stability against future changes, once KDP has determined how a local object should be
named, it will remember this decision in its metadata. This way, the (potentially costly, but
probably not) renaming logic does not need to be applied again on future reconciliations. It also
allows the Servlet to change its defaults and allows the service owner to change the naming rules
without breaking existing objects.

Since we do not want to store metadata on the platform side, we instead rely on label selectors on
the local objects. Each local object has a label for the remote cluster name, namespace and object
name, and when trying to find the matching local object, the Servlet simply does a label-based
search.

There is currently no sync-related metadata available on source objects, as this would either mean
annotations (untyped strings...) or require schema changes to allow additional fields in basically
random CRDs.

Note that fields like `generation` or `resourceVersion` are not relevant for any of the sync logic.

### Reconcile Loop

The sync loop can be divided into 5 parts:

1. find the local object
2. handle deletion
3. ensure the destination object exists
4. ensure the destination object's content matches the source object
5. synchronize related resources the same way (repeat 1-4 for each related resource)

#### Phase 1: Find the Local Object

For this, as mentioned in the connection chapter above, the Servlet tries to follow label selectors
on the local cluster.
This helps prevent cluttering consumer workspaces with KDP metadata.
If no object is found to match the labels, that's fine; the loop continues with phase 2, and a
possible Conflict error during creation (e.g. if the labels broke) is handled gracefully later
(see phase 3).

The remote object in the workspace becomes the `source object` and its local equivalent is called
the `destination object`.

#### Phase 2: Handle Deletion

A finalizer is used in the platform workspaces to prevent orphans on the service cluster side. This
is the only real evidence on the platform side that the Servlet is even doing things. When a remote
(source) object is deleted, the corresponding local object is deleted as well. Once the local object
is gone, the finalizer is removed from the source object.

#### Phase 3: Ensure Object Existence

We have a source object and now need to create the destination. This chart shows what's happening.

```mermaid
graph TB
    A(source object):::state --> B([cleanup if in deletion]):::step
    B --> C([ensure finalizer on source object]):::step
    C --> D{local object exists?}

    D -- yes --> I("continue with next phase…"):::state
    D -- no --> E([apply projection]):::step

    subgraph "ensure dest object exists"
    E --> G([ensure resulting namespace exists]):::step
    G --> H([create local object]):::step
    H --> H_err{Errors?}
    H_err -- Conflict --> J([attempt to adopt existing object]):::step
    end

    H_err -- success --> I
    J --> I

    classDef step color:#77F
    classDef state color:#F77
```

After following these steps, both the source and destination objects exist and we can continue with
phase 4.

Resource adoption happens when creation of the initial local object fails. This can happen when labels
get mangled. If such a conflict happens, the Servlet will "adopt" the existing local object by
adding/fixing the labels on it, so that it will be found and updated on the next reconciliation.

#### Phase 4: Content Synchronization

Content synchronization is rather simple, really.

First the source "spec" is used to patch the local object. Note that this step is called "spec", but
should actually be called "all top-level elements besides `apiVersion`, `kind`, `status` and
`metadata`, but still including some labels and annotations"; so if you were to publish RBAC objects,
the syncer would, for example, also include the `roleRef` field.

To allow proper patch generation, a `last-known-state` annotation is kept on the local object. This
functions just like the one kubectl uses and is required for the Servlet to properly detect changes
made by mutating webhooks.

If the published resource (CRD) has a `status` subresource enabled (not just a `status` field in its
schema, it must be a real subresource), then the Servlet will copy the status from the local object
back up to the remote (source) object.

#### Phase 5: Sync Related Resources

The same logic used for synchronizing the main published resource applies to its related resources as
well. The only difference is that the source side can be either remote (workspace) or local
(service cluster).

This currently also means that sync-related metadata, which is always kept on the object's copy,
will end up in the user workspace when a related object originates on the service cluster (the
most common use case). In a future version it could be nice to keep the sync state only on the
service cluster side, away from the users.
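
To make the label-based matching and the `last-known-state` annotation described above more
tangible, here is a sketch of what a synced object on the service cluster might carry. The label
and annotation keys are purely illustrative assumptions for this example, not the Servlet's actual
key names:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  # name and namespace rewritten according to the naming rules of the PublishedResource
  name: cert-52061b7ef97b75dc2f37-9e3096fdcc47ad8b8a6a
  namespace: 1084s8ceexsehjm2
  labels:
    # assumed label keys: the Servlet stores the remote cluster name, namespace and
    # object name as labels and finds the local object via a label-based search
    example.kdp.k8c.io/remote-cluster-name: 1084s8ceexsehjm2
    example.kdp.k8c.io/remote-namespace: default
    example.kdp.k8c.io/remote-name: my-cert
  annotations:
    # assumed annotation key: keeps the last applied state for patch generation
    example.kdp.k8c.io/last-known-state: '{"apiVersion":"cert-manager.io/v1","kind":"Certificate", ...}'
spec: {} # actual certificate spec omitted in this sketch
```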
diff --git a/content/kdp/service-providers/servlet/_index.en.md b/content/kdp/service-providers/servlet/_index.en.md
new file mode 100644
index 000000000..740abadbe
--- /dev/null
+++ b/content/kdp/service-providers/servlet/_index.en.md
@@ -0,0 +1,170 @@
+++
title = "The KDP Servlet"
linkTitle = "The Servlet"
weight = 3
+++

The Servlet is the component in KDP responsible for integrating external Kubernetes clusters.
It runs on a cluster, is configured with KDP credentials and will then synchronize data out
of KDP (i.e. out of kcp workspaces) onto the local cluster, and vice versa.

The name Servlet is an obvious reference to the "kubelet" in a regular Kubernetes cluster.

## High-level Overview

The intended use case follows roughly these steps:

1. A user in KDP with sufficient permissions creates a `Service` inside their organization
   workspace. This service (not to be confused with Kubernetes services) reserves an API group
   in the organization for itself, like `databases.example.corp` (two `Services` must not register
   the same API group).
2. After the `Service` is created, KDP will reconcile it and provide appropriate credentials
   for the Servlet (e.g. by creating a Kubernetes Secret with a preconfigured kubeconfig in it).
3. A service owner will now take these credentials and the configured API group and use them
   to set up the Servlet. It is assumed that the service owner (i.e. the cluster-admin in a
   service cluster) wants to make some resources (usually CRDs) available for use inside of KDP.
4. The service owner uses the Servlet Helm chart (or similar deployment technique) to install the
   Servlet in their cluster. This in itself won't do much besides registering the Servlet in the
   platform by setting a `status` on the Service object (there might also be a need to not just
   have the `Service` object, but also a Servlet object).
5. To actually make resources available in the platform, the service owner now has to create a
   set of `PublishedResource` objects. The configuration happens from their point of view, meaning
   they define how to publish a CRD in the platform, defining renaming rules and other projection
   settings.
6. Once a `PublishedResource` is created in the service cluster, the Servlet will pick it up,
   find the referenced CRD, convert/project this CRD into an `APIResourceSchema` (ARS) for kcp and
   then create the ARS in the org workspace.
7. Finally the Servlet will take all `PublishedResources` and bundle them into a single `APIExport`
   in the org workspace. This APIExport can then be bound in the org workspace itself (or later in
   any sub-workspaces, depending on permissions) and be used there. The `APIExport` has the same
   name as the KDP `Service` the Servlet is working with.
8. kcp automatically provides a virtual workspace for the `APIExport` and this is what the Servlet
   then uses to watch all objects for the relevant resources in the platform (i.e. in all workspaces).
9. The Servlet will now begin to synchronize objects back and forth between the service cluster
   and KDP.

## Details

### Data Flow Direction

It might be a bit confusing at first: The `PublishedResource` CRD describes the world from the
standpoint of a service owner, i.e. a person or team that owns a Kubernetes cluster and is tasked
with making their CRDs available in KDP (i.e. "publish" them).

However, the actual data flow later works in the opposite direction: the objects that users create
inside their kcp workspaces serve as the source of truth.
From there they are synced down to the service cluster, which applies the projection rules of the
`PublishedResource` _in reverse_.

Of course additional, auxiliary (related) objects can originate on the service cluster. For example,
if you create a Certificate object in a kcp workspace and it's synced down, cert-manager will then
acquire the certificate and create a Kubernetes `Secret`, which has to be synced back up (into the
kcp workspace where the certificate originated). So for auxiliary resources, the source of truth
can also be on the service cluster.

### Servlet Naming

Each Servlet must have a name, like "tom" or "mary". The fully qualified name of a Servlet is
`<servlet name>.<service name>`, so if the user in KDP had created a new `Service` named
`databases.examplecorp`, the name of the Servlet that serves this Service could be
`tom.databases.examplecorp`.

### Uniqueness

A single `Service` in KDP must be processed by exactly one Servlet. There is currently no mechanism
planned to subdivide a `Service` into chunks, where multiple service clusters (and therefore multiple
Servlets) could process each chunk.

Later the Servlet might be extended with label selectors; alternatively, Servlets might also "claim"
any object by annotating it in the kcp workspace. These things are not yet worked out, so for now we
have this 1:1 restriction.

Servlets make use of leader election, so it's perfectly fine to have multiple Servlet replicas, as
long as only one of them is the leader and actually doing work.

### kcp-awareness

controller-runtime can be used in a "kcp-aware" mode, where the cache, clients, mappers etc. are
aware of the workspace information. This, however, is not well tested upstream, and the code would
require shard-admin permissions to behave like this with regular kcp workspaces. The controller-runtime
fork's kcp-awareness is really more geared towards working in virtual workspaces.

Because of this the Servlet needs to get a kubeconfig for KDP that already points to the org's
workspace (i.e. the `server` already contains a `/clusters/root:myorg` path). The basic controllers
in the Servlet then treat this as a plain ol', regular Kubernetes cluster (no kcp-awareness).

To this end, the Servlet will, upon startup, try to access the `cluster` object in the target
workspace. This is to resolve the cluster name (e.g. `root:myorg`) into a logicalcluster name (e.g.
`gibd3r1sh`). The Servlet has to know which logicalcluster the target workspace represents in order
to query resources properly.

Only the controllers that are later responsible for interacting with the virtual workspace are
kcp-aware. They have to be in order to know what workspace a resource is living in.

### PublishedResources

A `PublishedResource` describes which CRD should be made available inside KDP. The CRD name can be
projected (i.e. renamed), so a `kubermatic.k8c.io/v1 Cluster` can become a
`cloud.examplecorp/v1 KubernetesCluster`.

In addition to projecting (mapping) the GVK, the `PublishedResource` also contains optional naming
rules, which influence how the local objects that the Servlet is creating are named.

As a single Servlet serves a single Service, the API group used in KDP is the same for all
`PublishedResources`. It's the API group configured in the KDP `Service` inside the platform (created
in step 1 of the overview above).
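
For illustration, such a projection could look roughly like the following `PublishedResource`
(following the format described in the Publishing Resources guide; the plural shown is an
assumption, and the KDP-side API group comes from the `Service`, as described above):

```yaml
apiVersion: services.kdp.k8c.io/v1alpha1
kind: PublishedResource
metadata:
  name: publish-kubermatic-clusters # illustrative name
spec:
  resource:
    kind: Cluster
    apiGroup: kubermatic.k8c.io
    version: v1
  projection:
    version: v1
    kind: KubernetesCluster
    plural: kubernetesclusters
```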

To prevent chaos, `PublishedResources` are immutable: handling the case that a PR first wants to
publish `kubermatic.k8c.io/v1 Cluster` and then suddenly `kubermatic.k8c.io/v1 User` resources would
mean re-syncing and cleaning up everything in all affected kcp workspaces. The Servlet would need to
be able to delete and recreate objects to follow this GVK change, which is a level of complexity we
simply do not want to deal with at this point in time. Also, `APIResourceSchemas` are immutable
themselves.

More information is available in the [Publishing Resources]({{< relref "../publish-resources" >}})
guide.

### APIExports

An `APIExport` in kcp combines multiple `APIResourceSchemas` (ARS). Each ARS is created based on a
`PublishedResource` in the service cluster.

To prevent data loss, ARS are never removed from an `APIExport`. We simply do not have enough
experience to really know what happens when an ARS suddenly becomes unavailable. To prevent
damage and confusion, the Servlet will only ever add new ARS to the one `APIExport` it manages.

## Controllers

### apiexport

This controller aggregates the `PublishedResources` and manages a single `APIExport` in KDP.

### apiresourceschema

This controller takes `PublishedResources`, projects and converts them, and creates `APIResourceSchemas`
in KDP.

### register

This controller updates the status on the KDP `Service` object to let the system know that a Servlet
has picked up the Service and is serving it. In the future this controller might also create/update
a `Servlet` object, akin to how the Kubernetes kubelet creates and maintains a `Node` object.

### syncmanager

This controller watches the `APIExport` and waits for the virtual workspace to become available. It
also watches all `PublishedResources` (PRs) and reconciles when any of them is changed (they are
immutable, but the controller still reacts to any events on them).

The controller will then set up a controller-runtime `Cluster` abstraction for the virtual workspace
and start many `sync` controllers (one for each `PublishedResource`). Whenever PRs change, the
syncmanager will make sure that the correct set of `sync` controllers is running.

### sync

This is where the meat and potatoes happen. The sync controller is started for a single
`PublishedResource` and is responsible for synchronizing all objects for that resource between the
local service cluster and KDP.

The `sync` controller was written to handle a single `PublishedResource` so that it does not have to
deal with dynamically registering/stopping watches on its own. Instead, the sync controller can be
written as a more or less "normal" controller-runtime controller.
diff --git a/content/kdp/tutorials/_index.en.md b/content/kdp/tutorials/_index.en.md
new file mode 100644
index 000000000..7fff6be59
--- /dev/null
+++ b/content/kdp/tutorials/_index.en.md
@@ -0,0 +1,4 @@
+++
title = "Tutorials"
weight = 4
+++
diff --git a/content/kdp/tutorials/kcp-command-line/_index.en.md b/content/kdp/tutorials/kcp-command-line/_index.en.md
new file mode 100644
index 000000000..90e449af0
--- /dev/null
+++ b/content/kdp/tutorials/kcp-command-line/_index.en.md
@@ -0,0 +1,75 @@
+++
title = "kcp on the Command Line"
weight = 1
+++

Interacting with KDP means interacting with kcp. Many platform operations like enabling Services,
creating workspaces, etc. are just manipulating kcp directly behind the scenes.
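
For example, a Service that has been enabled in a workspace ultimately shows up as a kcp
`APIBinding` object (see the API Management section below), which you can inspect like any other
Kubernetes resource. A quick sketch, assuming your kubeconfig already points at the workspace in
question and borrowing the `my.fancy.api` name from the example further down:

```bash
# list the APIBindings (i.e. the APIs enabled in the current workspace)
kubectl get apibindings

# inspect one of them in detail
kubectl get apibinding my.fancy.api -o yaml
```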

Even though many day-to-day operations will happen within a single workspace (like deploying or
managing an application), there will also be instances where you might need to interact with kcp
and switch between workspaces. This can be done manually by editing your kubeconfig accordingly,
but is much simpler with the [kcp kubectl plugin](https://docs.kcp.io/kcp/v0.22/concepts/kubectl-kcp-plugin/).
It provides additional commands for common operations in kcp.

Make sure you have the kcp kubectl plugin installed.

## Workspaces

In kcp, workspaces are identified by a prefix in the API server path, for example
`https://127.0.0.1/clusters/root:foo` is a different workspace than `https://127.0.0.1/clusters/root:bar`.
Under both URLs you will find a regular Kubernetes API that supports service discovery and everything
else (and more!) you would expect.

To make switching between workspaces (i.e. changing the `clusters.*.server` value) easier, the kcp
kubectl plugin provides the `ws` command, which is like a combination of `cd`, `mkdir` and `pwd`.
Take note of the following examples:

```bash
export KUBECONFIG=/path/to/a/kubeconfig/that/points/to/kcp

# go into the root workspace, no matter where you are
kubectl ws root

# descend into the child workspace "foo" (if you are in root currently,
# you would end up in "root:foo")
kubectl ws foo

# ...is the same as
kubectl ws root:foo

# go one workspace up
kubectl ws ..

# print the current workspace
kubectl ws .

# print a tree representation of workspaces
kubectl ws tree

# create a sub workspace (note: this is a special case for the `ws` command,
# not to be confused with the not-working variant `kubectl create ws`)
kubectl ws create --type=… my-subworkspace

# once this workspace is ready (a few seconds later), you could
kubectl ws my-subworkspace
```

## API Management

A KDP Service is reconciled into an `APIExport`. To use this API, you have to _bind to_ it. Binding
involves creating a matching (= same name) `APIBinding` in the workspace where the API should be
made available.

Note that you cannot have two `APIExports` that both provide an API `foo.example.com` enabled in the
same workspace.

Binding to an `APIExport` can be done using the kcp kubectl plugin:

```bash
# kubectl kcp bind apiexport <workspace path>:<APIExport name>
kubectl kcp bind apiexport root:my-org:my.fancy.api
```

More information on binding APIs can be found in
[Using Services]({{< relref "../../platform-users/consuming-services" >}}).
diff --git a/content/kdp/tutorials/your-first-service/_index.en.md b/content/kdp/tutorials/your-first-service/_index.en.md
new file mode 100644
index 000000000..27689df8d
--- /dev/null
+++ b/content/kdp/tutorials/your-first-service/_index.en.md
@@ -0,0 +1,69 @@
+++
title = "Your First Service"
weight = 2
+++

A "service" in KDP defines a unique Kubernetes API Group and offers a number of resources (types) to
use. A service could offer certificate management, databases, cloud infrastructure or any other set
of Kubernetes resources.

Services are provided by service owners, who run their own Kubernetes clusters and take care of the
maintenance and scaling tasks for the workload provisioned by all users of the service(s) they
offer.

A KDP Service should not be confused with a Kubernetes Service. Internally, a KDP Service is
ultimately translated into a kcp `APIExport` with a number of `APIResourceSchemas` (~ CRDs).
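
Once a Service has been set up, you can inspect these kcp objects yourself. A quick sketch, assuming
the Service lives in an organization workspace called `root:my-org`:

```bash
# switch into the workspace that holds the Service
kubectl ws root:my-org

# the Service is backed by an APIExport plus one APIResourceSchema per published resource
kubectl get apiexports
kubectl get apiresourceschemas
```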

## Scope

This document describes the process of setting up a new Service from scratch, from the standpoint
of a Service Owner. See [Using Services]({{< relref "../../platform-users/consuming-services" >}})
for more information from the standpoint of Service Consumers (i.e. end users).

## Creating Services

Log in to the KDP Dashboard and navigate to the organization where the Service should be available
(generally, Services can be consumed from any other workspace). Choose the "Create Service" option.

To define a new service you must choose (i.e. make up) a unique API Group and can then also specify
a more descriptive name and a short description for your service. All of these will help platform
users find and consume the service later.

This concludes all required steps to define the new service. Click on the confirm button to create
the Service.

## Setting up the Servlet

Once the service is created, KDP will provision a kubeconfig. This kubeconfig is meant to be used
by the KDP Servlet, the component installed by service owners into the service cluster.

(TODO: Make this kubeconfig available via the KDP Dashboard.) You can also use `kubectl` to navigate
into your workspace and inspect your Service object. In `spec.kubeconfig` you will find the name of
the kubeconfig Secret that you can use for your Servlet.

Now switch your focus to your own cluster, where your business logic happens (for example where
Crossplane runs). For your Service you need to provide exactly _one_ Servlet in _one_ Kubernetes
cluster. This Servlet can have multiple replicas as it uses leader election, but you must not have
two or more independent Servlets processing the same Service. There is currently no mechanism to
spread load between multiple service clusters, and two or more Servlets will most likely conflict
with each other.

The Servlet comes as a Helm chart. See `deploy/charts/servlet` for more information. You basically
need to provide the kubeconfig generated by KDP as the "platform kubeconfig", the service's name
(not its API Group) and a unique name for the Servlet itself. Put all the information in a
`values.yaml` and run `helm install` to deploy your Servlet.

Once the Servlet has booted up, it will just sit there and do nothing, waiting for further
configuration.

## Defining Published Resources

Once the Servlet is up and running, you have to create `PublishedResource` objects on the service
cluster. See the documentation for
[publishing resources]({{< relref "../../service-providers/publish-resources" >}}) for more
information.

The Servlet will automatically react to `PublishedResources` and begin replicating them into KDP/kcp.
This means setting up an `APIExport` and `APIResourceSchemas`. Once this is done, the KDP Service
can actually be consumed on the platform. See
[Using Services]({{< relref "../../platform-users/consuming-services" >}}) for more information.
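
To give an idea of what such an object looks like, here is a minimal sketch taken from the
cert-manager example in the publishing resources guide (projection, naming and related resources
omitted):

```yaml
apiVersion: services.kdp.k8c.io/v1alpha1
kind: PublishedResource
metadata:
  name: publish-certmanager-certs
spec:
  resource:
    kind: Certificate
    apiGroup: cert-manager.io
    version: v1
```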