Commit

add KDP content
xrstf committed Apr 10, 2024
1 parent d1feaf1 commit b591466
Showing 14 changed files with 2,030 additions and 0 deletions.
56 changes: 56 additions & 0 deletions content/kdp/_index.en.md
@@ -0,0 +1,56 @@
+++
title = "Kubermatic Developer Platform"
sitemapexclude = true
+++

KDP (Kubermatic Developer Platform) is a new Kubermatic product in development that targets the IDP
(Internal Developer Platform) segment. This segment is part of a larger shift in the ecosystem to
"Platform Engineering", which champions the idea that DevOps in its effective form didn't quite work
out and that IT infrastructure needs new paradigms. The core idea of Platform Engineering is that
internal platforms provide higher-level services so that development teams no longer need to spend
time on operating components not core to their applications. These internal services are designed in
alignment with company policies and provide a customized framework for running applications and/or
their dependencies.

KDP offers a central control plane for IDPs by providing an API backbone that allows service
providers to register and platform users to consume **services**. KDP itself does **not** host the
actual workloads providing such services (e.g. if a database service is offered, the underlying
PostgreSQL pods are not hosted in KDP) and instead delegates this to so-called **service clusters**.
A component called [**servlet**]({{< relref "service-providers/servlet" >}}) is installed onto service
clusters which allows service providers (who own the service clusters) to publish APIs from their
service cluster onto KDP's central platform.

KDP is based on [kcp](https://kcp.io), a CNCF Sandbox project to run many lightweight "logical"
clusters. Each of them acts as an independent Kubernetes API server to platform users and is called
a "Workspace". Workspaces are organized in a tree hierarchy, so there is a `root` workspace that has
child workspaces, and those can have child workspaces, and so on. In KDP, platform users own a certain
part of the workspace hierarchy (maybe just a single workspace, maybe a whole subtree) and
self-manage those parts of the hierarchy that they own. This includes assigning permissions to
delegate certain tasks and subscribing to service APIs. Platform users can therefore "mix and match"
which APIs they want to have available in their workspaces and consume only the services they need.
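
To make the hierarchy more tangible, here is a minimal sketch of how a child workspace could be
created, assuming kcp's `tenancy.kcp.io/v1alpha1` API (the workspace names are hypothetical):

```yaml
# Applied while "root:my-org" is the current workspace, this creates the child
# workspace "team-a", i.e. the path "root:my-org:team-a".
apiVersion: tenancy.kcp.io/v1alpha1
kind: Workspace
metadata:
  name: team-a
spec:
  type:
    name: universal # a general-purpose workspace type
    path: root
```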

KDP is an automation/DevOps/GitOps-friendly product and is "API-driven". Since it exposes
Kubernetes-style APIs it can be used with a lot of existing tooling (e.g. `kubectl` works to manage
resources). We have decided against an intermediate API (like we have in KKP) and the KDP Dashboard
directly interacts with the Kubernetes APIs exposed by kcp. As such everything available from the
Dashboard will be available from the API. A way for service providers to plug in custom dashboard
logic is planned, but not realized yet.

Service APIs are not pre-defined by KDP; their design is up to the actual installation. Crossplane
on the service cluster can be used to provide abstraction APIs that are then
reconciled to more complex resource bundles. The level of abstraction in an API is up to service
providers and will vary from setup to setup.
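
As an illustration only (the API group, kind and schema below are hypothetical, not part of KDP),
such an abstraction API on a service cluster could be defined with a Crossplane
`CompositeResourceDefinition` along these lines:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xdatabases.example.kdp.example.com
spec:
  group: example.kdp.example.com
  names:
    kind: XDatabase
    plural: xdatabases
  claimNames:
    kind: Database
    plural: databases
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: string # e.g. "small", "medium", "large"
```

A matching Crossplane `Composition` would then reconcile each such object into the more complex
resource bundle (workloads, Secrets, etc.) on the service cluster.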

## Personas

We have identified several types of stakeholders (personas) in an Internal Developer Platform based
on KDP. Here is a brief overview:

- **Platform Users** are the end users (often application developers or "DevOps engineers") in an
IDP. They consume services (e.g. they want a database or they have a container image that they want
to be started), own workspaces and self-organize within those workspaces.
- **Service Providers** offer services to developers. They register APIs that they want to provide on
the service "marketplace" and they operate service clusters and controllers/operators on those
service clusters that actually provide the services in question.
- **Platform Owners** are responsible for keeping KDP itself available and assign top-level
permissions so that developers and service providers can then utilize self-service capabilities.
26 changes: 26 additions & 0 deletions content/kdp/faq/_index.en.md
@@ -0,0 +1,26 @@
+++
title = "Frequently Asked Questions"
linkTitle = "FAQ"
weight = 99
+++

## How does KDP relate to KKP3?

This product started out with early prototyping of KKP3 based on kcp. The focus was later shifted to
providing an IDP product instead of "only" a Kubernetes cluster management solution.

## How does KDP relate to Backstage?

KDP occupies a similar space to [Backstage](https://backstage.io/) as a framework product to build
IDPs. KDP differentiatse from Backstage due to the strong API underpinning that is provided by the
Kubernetes-style API powering it. We looked at Backstage but found the process to integrate services
to be tedious and believe that KDP offers significant value for DevOps/GitOps workflows over Backstage.

## How can different services integrate with each other (e.g. a service running containers and a database service)?

Classic consulting answer: It depends. KDP is a backbone/framework for building your own platform,
but service architecture is out of scope for it. In the future we will likely provide "blueprints"
on how to build an IDP with KDP and popular services, but whether two services run on the same
physical Kubernetes cluster, whether all service clusters have mesh routing to each other, or whether
connection details include routable IP addresses (e.g. because load balancers are used to expose service
instances) is up to the platform owners and service providers to decide.
4 changes: 4 additions & 0 deletions content/kdp/platform-operators/_index.en.md
@@ -0,0 +1,4 @@
+++
title = "Platform Operators"
weight = 1
+++
28 changes: 28 additions & 0 deletions content/kdp/platform-operators/monitoring/_index.en.md
@@ -0,0 +1,28 @@
+++
title = "Monitoring"
weight = 1
+++

Monitoring for KDP is currently very basic. We deploy the
[kube-prometheus-stack](https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack)
Helm chart from the infra repository (see
[folder for deployment logic](https://github.com/kubermatic/infra/tree/main/clusters/platform/dev)),
but it basically only deploys prometheus-operator and Grafana. Default rules and dashboards are
omitted.
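
As a rough sketch only (not the exact configuration from the infra repository), stripping the chart
down to prometheus-operator and Grafana could be done with Helm values along these lines:

```yaml
# kube-prometheus-stack values (sketch; see the infra repository for the
# authoritative deployment logic)
defaultRules:
  create: false # no default Prometheus alerting/recording rules
grafana:
  defaultDashboardsEnabled: false # no bundled dashboards
alertmanager:
  enabled: false
nodeExporter:
  enabled: false
kubeStateMetrics:
  enabled: false
```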

## Accessing Grafana

Grafana is currently not exposed. You will need to use port-forwarding to access it.

```sh
$ kubectl -n monitoring port-forward svc/prometheus-grafana 8080:80
```

Grafana is now accessible at [localhost:8080](http://localhost:8080). A datasource called "KDP" is
added to Grafana's list of datasources; make sure to use _that_ one.

## Dashboards

Currently, KDP ships the following dashboards:

- **KDP / System / API Server**: Basic API server metrics for kcp.
4 changes: 4 additions & 0 deletions content/kdp/platform-users/_index.en.md
@@ -0,0 +1,4 @@
+++
title = "Platform Users"
weight = 3
+++
99 changes: 99 additions & 0 deletions content/kdp/platform-users/consuming-services/_index.en.md
@@ -0,0 +1,99 @@
+++
title = "Consuming Services"
weight = 1
+++

This document describes how to use (consume) Services offered in KDP.

## Background

A "service" in KDP defines a unique Kubernetes API Group and offers a number of resources (types) to
use. A service could offer certificate management, databases, cloud infrastructure or any other set
of Kubernetes resources.

Services are provided by service owners, who run their own Kubernetes clusters and take care of the
maintenance and scaling tasks for the workload provisioned by all users of the service(s) they
offer.

A KDP Service should not be confused with a Kubernetes Service. Internally, a KDP Service is
ultimately translated into a kcp `APIExport` with a number of `APIResourceSchemas` (~ CRDs).
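
For illustration, a minimal sketch of what such an `APIExport` could look like on the service
provider's side (the API group and schema name are hypothetical):

```yaml
apiVersion: apis.kcp.io/v1alpha1
kind: APIExport
metadata:
  name: my.fancy.api
spec:
  latestResourceSchemas:
    # each entry references an APIResourceSchema (~ a snapshot of a CRD)
    - v1.certificates.my.fancy.api
```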

## Browsing Services

Log in to the KDP Dashboard and choose your organization. Then select "Services" in the menu bar to
see a list of all available Services. This page also allows you to create new services, which is
further described in [Your First Service]({{< relref "../../tutorials/your-first-service" >}}) for
service owners.

Note that every Service shows:

* its main title (the human-readable name of a Service, like "Certificate Management")
* its internal name (ultimately the name of the KDP `Service` object, which you would need to
enable the service manually using `kubectl`)
* a short description

## Enabling a Service

Before a KDP Service can be used, it must be enabled in the workspace where it should be available.

### Dashboard

(TODO: currently the UI has no support for this.)

### Manually

Alternatively, create the `APIBinding` object yourself. This section assumes that you are familiar
with [kcp on the Command Line]({{< relref "../../tutorials/kcp-command-line" >}}) and have the kcp kubectl plugin installed.

First, you need to get the kubeconfig for accessing your kcp workspaces. Once you have set up your
kubeconfig, make sure you're in the correct workspace by using
`kubectl ws <path to your workspace>`. Use `kubectl ws .` if you're unsure where you currently are.

To enable a Service, use `kcp bind apiexport` and specify the path to and name of the `APIExport`.

```bash
# kubectl kcp bind apiexport <path to KDP Service>:<API Group of the Service>
kubectl kcp bind apiexport root:my-org:my.fancy.api
```

Without the plugin, you can create an `APIBinding` manually; simply `kubectl apply` a manifest like this:

```yaml
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: my.fancy.api
spec:
  reference:
    export:
      name: my.fancy.api
      path: root:my-org
```

Shortly after, the new API will be available in the workspace. Check via `kubectl api-resources`.
You can now create objects for types in that API group to your liking and they will be synced and
processed behind the scenes.
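
For example, if the bound API group offered a `Certificate` resource (a purely hypothetical type and
spec, just for illustration), you could create an object like this in your workspace:

```yaml
apiVersion: my.fancy.api/v1alpha1
kind: Certificate # hypothetical resource offered by the bound API
metadata:
  name: my-first-cert
  namespace: default
spec:
  commonName: example.com # hypothetical field
```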

Note that a Service often claims access to related resources, typically Secrets and ConfigMaps. You
must explicitly allow the Service to access these in your workspace, which means editing/patching the
`APIBinding` object (the kcp kubectl plugin currently has no support for managing permission claims).
Each of the claimed resources has to be accepted or rejected:

```yaml
spec:
  permissionClaims:
    # Nearly all Services in KDP require access to namespaces, rejecting this will
    # most likely break the Service, even more than rejecting any other claim.
    - all: true
      resources: namespaces
      state: Accepted
    - all: true
      resources: secrets
      state: Accepted # or Rejected
```

Rejecting a claim can severely impact a Service, if not break it entirely. Consult the Service's
documentation or the service owner to find out whether rejecting a particular claim is supported.

When you _change into_ (`kubectl ws …`) a different workspace, kubectl will inform you if there are
outstanding permission claims that you need to accept or reject.
83 changes: 83 additions & 0 deletions content/kdp/platform-users/rbac/_index.en.md
@@ -0,0 +1,83 @@
+++
title = "RBAC"
weight = 2
+++

# RBAC in KDP

Authorization (authZ) in KDP closely resembles
[Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) since KDP uses kcp
as its API control plane. Besides the "standard" RBAC of Kubernetes, kcp implements concepts specific
to its multi-workspace nature. See
[upstream documentation](https://docs.kcp.io/kcp/v0.22/concepts/authorization/) for them.

## Cross-workspace RBAC propagation

KDP implements controllers that allow propagation of `ClusterRoles` and `ClusterRoleBindings` to
child workspaces of the workspace that they are in. Be aware that existing resources with the same
names in the child workspaces will be overwritten.

To sync a `ClusterRole` or `ClusterRoleBinding`, annotate it with `kdp.k8c.io/sync-to-workspaces="*"`.
In the future, the feature might allow syncing only to specific child workspaces, but for now it only
supports syncing to all "downstream" workspaces.
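
For illustration, a minimal sketch of a `ClusterRole` carrying the sync annotation (the role name
and rules are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: certificate-viewer # hypothetical name
  annotations:
    kdp.k8c.io/sync-to-workspaces: "*"
rules:
  - apiGroups: ["certs-demo.k8c.io"]
    resources: ["certificates"]
    verbs: ["get", "list", "watch"]
```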

The default roles shipped with KDP are annotated like this to be provided in all workspaces.

## Auto-generate Service ClusterRoles

KDP comes with the `apibinding-clusterroles-controller`, which picks up `APIBindings` with the label
`rbac.kdp.k8c.io/create-default-clusterroles=true`. It generates two `ClusterRoles` called
`services:<API>:developer` and `services:<API>:viewer`, which give write and read permissions
respectively to all resources bound by the `APIBinding`.

Both `ClusterRoles` are aggregated to the "Developer" and "Member" roles (if present).
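
For illustration, an `APIBinding` opting into the auto-generated roles could look like this (the
export path and name are hypothetical):

```yaml
apiVersion: apis.kcp.io/v1alpha1
kind: APIBinding
metadata:
  name: my.fancy.api
  labels:
    rbac.kdp.k8c.io/create-default-clusterroles: "true"
spec:
  reference:
    export:
      name: my.fancy.api
      path: root:my-org
```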

If the auto-generated rules are not desired because workspace owners want to assign more granular
permissions, the recommendation is to create `APIBindings` without the label mentioned above and instead
create `ClusterRole` objects in their workspaces. The `APIBinding` status can help in identifying
which resources are available (to add them to `ClusterRoles`):

```yaml
status:
  [...]
  boundResources:
    - group: certs-demo.k8c.io # <- API group
      resource: certificates   # <- resource name
      schema:
        UID: 758377e9-4442-4706-bdd7-365991863931
        identityHash: 7b6d5973370fb0e9104ac60b6bb5df81fc2b2320e77618a042c20281274d5a0a
        name: vc517860e.certificates.certs-demo.k8c.io
      storageVersions:
        - v1alpha1
```

Creating such `ClusterRoles` is a manual process and follows the exact same paradigms as normal
Kubernetes RBAC. Manually created roles can still use the aggregation labels (documented below) so
that they are aggregated into the "Developer" and "Member" meta-roles.
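
A minimal sketch of such a manually created `ClusterRole`, using the bound resources from the status
above and one of the aggregation labels documented below (the role name is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: certificates-editor # hypothetical name
  labels:
    rbac.kdp.k8c.io/aggregate-to-developer: "true"
rules:
  - apiGroups: ["certs-demo.k8c.io"]
    resources: ["certificates"]
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
```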

## Well-Known Metadata

### ClusterRoles

#### Labels

| Label | Value | Description |
| ---------------------------------------- | ---------- | -------------------------- |
| `rbac.kdp.k8c.io/display` | `"true"` | Make the `ClusterRole` available for assignment to users in the KDP dashboard. |
| `rbac.kdp.k8c.io/aggregate-to-member` | `"true"` | Aggregate this `ClusterRole` into the "Member" role, which is used for basic membership in a workspace (i.e. mostly read-only permissions). |
| `rbac.kdp.k8c.io/aggregate-to-developer` | `"true"` | Aggregate this `ClusterRole` into the "Developer" role, which is assigned to active contributors (creating and deleting objects). |

#### Annotations

| Annotation | Value | Description |
| ------------------------------ | ---------- | -------------------------- |
| `rbac.kdp.k8c.io/display-name` | String | Display name in the KDP dashboard. The dashboard falls back to the `ClusterRole` object name if this is not set. |
| `rbac.kdp.k8c.io/description` | String | Description shown as help in the KDP dashboard for this `ClusterRole`. |

### APIBindings

#### Labels

| Label | Value | Description |
| --------------------------------------------- | -------- | -------------------------------------------------------------------------------------------- |
| `rbac.kdp.k8c.io/create-default-clusterroles` | `"true"` | Create default ClusterRoles (developer and viewer) for resources bound by this `APIBinding`. |
4 changes: 4 additions & 0 deletions content/kdp/service-providers/_index.en.md
@@ -0,0 +1,4 @@
+++
title = "Service Providers"
weight = 2
+++