
🚀 Feature: more cluster details in the catalog #15088

Closed
jamieklassen opened this issue Dec 7, 2022 · 0 comments · Fixed by #15812
Labels
  • area:catalog (Related to the Catalog Project Area)
  • area:kubernetes (Related to the Kubernetes Project Area - not deploying Backstage with k8s.)
  • enhancement (New feature or request)

jamieklassen commented Dec 7, 2022

🔖 Feature description

The catalog clusterLocator should support more ClusterDetails fields to make it a bit closer to the config clusterLocator in terms of features. As it stands, the ClusterDetails interface has many fields:

/**
 *
 * @alpha
 */
export interface ClusterDetails {
  /**
   * Specifies the name of the Kubernetes cluster.
   */
  name: string;
  url: string;
  authProvider: string;
  serviceAccountToken?: string | undefined;
  /**
   * oidc provider used to get id tokens to authenticate against kubernetes
   */
  oidcTokenProvider?: string | undefined;
  skipTLSVerify?: boolean;
  /**
   * Whether to skip the lookup to the metrics server to retrieve pod resource usage.
   * It is not guaranteed that the Kubernetes distro has the metrics server installed.
   */
  skipMetricsLookup?: boolean;
  caData?: string | undefined;
  /**
   * Specifies the link to the Kubernetes dashboard managing this cluster.
   * @remarks
   * Note that you should specify the app used for the dashboard
   * using the dashboardApp property, in order to properly format
   * links to kubernetes resources, otherwise it will assume that you're running the standard one.
   * @see dashboardApp
   * @see dashboardParameters
   */
  dashboardUrl?: string;
  /**
   * Specifies the app that provides the Kubernetes dashboard.
   * This will be used for formatting links to kubernetes objects inside the dashboard.
   * @remarks
   * The existing apps are: standard, rancher, openshift, gke, aks, eks
   * Note that it will default to the regular dashboard provided by the Kubernetes project (standard).
   * Note that you can add your own formatter by registering it to the clusterLinksFormatters dictionary.
   * @defaultValue standard
   * @see dashboardUrl
   * @example
   * ```ts
   * import { clusterLinksFormatters } from '@backstage/plugin-kubernetes';
   * clusterLinksFormatters.myDashboard = (options) => ...;
   * ```
   */
  dashboardApp?: string;
  /**
   * Specifies specific parameters used by some dashboard URL formatters.
   * This is used by the GKE formatter which requires the project, region and cluster name.
   * @see dashboardApp
   */
  dashboardParameters?: JsonObject;
  /**
   * Specifies which custom resources to look for when returning an entity's
   * Kubernetes resources.
   */
  customResources?: CustomResourceMatcher[];
}

but the catalog clusterLocator only retrieves and surfaces four of them (name, url, caData and authProvider):

const clusterDetails: ClusterDetails = {
  name: entity.metadata.name,
  url: entity.metadata.annotations![ANNOTATION_KUBERNETES_API_SERVER]!,
  caData:
    entity.metadata.annotations![ANNOTATION_KUBERNETES_API_SERVER_CA]!,
  authProvider:
    entity.metadata.annotations![ANNOTATION_KUBERNETES_AUTH_PROVIDER]!,
};

🎤 Context

What works today

Here's my basic setup:

OIDC Authority

I create an Azure AD app registration with

$ export CLIENT_ID=$(az ad app create --display-name my-backstage --web-redirect-uris http://localhost:7007/api/auth/microsoft/handler/frame http://localhost:8000 | jq -r .appId)
$ export CLIENT_SECRET=$(az ad app credential reset --append --id $CLIENT_ID | jq -r .password)
OIDC-enabled K8s Cluster

I create a kind cluster with

$ kind create cluster --config - <<EOF
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
kubeadmConfigPatches:
- |-
  kind: ClusterConfiguration
  apiServer:
    extraArgs:
      oidc-client-id: CLIENT-ID
      oidc-issuer-url: https://login.microsoftonline.com/TENANT-ID/v2.0
      oidc-username-claim: email
EOF

where CLIENT-ID is the ID for the app registration I created previously, and TENANT-ID is the ID of my Azure AD tenant.

Then I set up some RBAC on that cluster

$ kubectl create clusterrolebinding me-admin --user EMAIL-ADDRESS --clusterrole cluster-admin

where EMAIL-ADDRESS is the email address of my Azure account. Then, just to verify, I set up my kubeconfig to use kubelogin by running

$ kubectl config set-credentials kind-oidc \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg oidc-login \
  --exec-arg get-token \
  --exec-arg --oidc-issuer-url=https://login.microsoftonline.com/TENANT-ID/v2.0 \
  --exec-arg --oidc-client-id=$CLIENT_ID \
  --exec-arg --oidc-client-secret=$CLIENT_SECRET \
  --exec-arg --oidc-extra-scope=email
$ kubectl config set-context kind-kind --user kind-oidc

so that running kubectl get ns causes my browser to pop up an Azure AD login before successfully showing my namespaces.

Backstage

My app-config looks like

auth:
  environment: development
  providers:
    microsoft:
      development:
        clientId: CLIENT-ID
        clientSecret: CLIENT-SECRET
        tenantId: TENANT-ID
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: config
      clusters:
      - name: kind
        url: https://127.0.0.1:PORT
        caData: KIND-CA-DATA
        authProvider: oidc
        oidcTokenProvider: microsoft
        skipMetricsLookup: true
catalog:
  locations:
    - type: file
      target: ../../kubernetes.yaml
      rules:
        - allow: [User, Component, Resource]

where CLIENT-ID, CLIENT-SECRET, and TENANT-ID are as discussed above, PORT is the port on which my kind apiserver is running, and KIND-CA-DATA is the base64-encoded CA bundle for the kind apiserver (pulled from my kubeconfig).

where kubernetes.yaml itself contains

---
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: kube-dns
  annotations:
    'backstage.io/kubernetes-label-selector': 'k8s-app=kube-dns'
spec:
  type: service
  lifecycle: stable
  owner: user:guest
---
apiVersion: backstage.io/v1alpha1
kind: User
metadata:
  name: me
  annotations:
    'microsoft.com/email': EMAIL-ADDRESS
spec:
  memberOf: []

where EMAIL-ADDRESS is as discussed above. Following all these steps, I can run yarn dev and open http://localhost:3000/catalog/default/component/kube-dns/kubernetes, get prompted to sign in to Microsoft, and see the coredns pods in my kind cluster.

What I want

Instead of specifying all my cluster details in the app-config, I'd like some way of specifying them in the catalog, so that my app-config would become

auth:
  environment: development
  providers:
    microsoft:
      development:
        clientId: CLIENT-ID
        clientSecret: CLIENT-SECRET
        tenantId: TENANT-ID
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: catalog
catalog:
  locations:
    - type: file
      target: ../../kubernetes.yaml
      rules:
        - allow: [User, Component, Resource]

and kubernetes.yaml could look something like

---
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: kind
  annotations:
    'kubernetes.io/api-server': https://127.0.0.1:PORT
    'kubernetes.io/api-server-certificate-authority': KIND-CA-DATA
    'kubernetes.io/auth-provider': oidc
    'kubernetes.io/oidc-token-provider': microsoft
    'kubernetes.io/skip-metrics-lookup': 'true'
spec:
  type: kubernetes-cluster
  owner: user:guest
---
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: kube-dns
  annotations:
    'backstage.io/kubernetes-label-selector': 'k8s-app=kube-dns'
spec:
  type: service
  lifecycle: stable
  owner: user:guest
---
apiVersion: backstage.io/v1alpha1
kind: User
metadata:
  name: me
  annotations:
    'microsoft.com/email': EMAIL-ADDRESS
spec:
  memberOf: []

and still get the same experience. As it happens, if you try to run these config files with the code on master, the page simply shows an error panel with the message

Errors: authProvider "oidc" has no KubernetesAuthProvider defined for it

✌️ Possible Implementation

What comes to mind is adding new annotations for other Kubernetes cluster details to @backstage/catalog-model, and then having the CatalogClusterLocator read them in its getClusters method and surface them in the clusterDetails appropriately.
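A rough sketch of what that mapping could look like follows, assuming hypothetical annotation names that mirror the ones in the kubernetes.yaml example above; the actual constants exported from @backstage/catalog-model could well end up different:

// Hypothetical annotation keys; the names here are illustrative only.
const ANNOTATION_KUBERNETES_OIDC_TOKEN_PROVIDER =
  'kubernetes.io/oidc-token-provider';
const ANNOTATION_KUBERNETES_SKIP_METRICS_LOOKUP =
  'kubernetes.io/skip-metrics-lookup';
const ANNOTATION_KUBERNETES_SKIP_TLS_VERIFY = 'kubernetes.io/skip-tls-verify';
const ANNOTATION_KUBERNETES_DASHBOARD_URL = 'kubernetes.io/dashboard-url';
const ANNOTATION_KUBERNETES_DASHBOARD_APP = 'kubernetes.io/dashboard-app';

// Inside CatalogClusterLocator.getClusters, each cluster entity's annotations
// would then be mapped onto the optional ClusterDetails fields when present:
const annotations = entity.metadata.annotations!;
const clusterDetails: ClusterDetails = {
  name: entity.metadata.name,
  url: annotations[ANNOTATION_KUBERNETES_API_SERVER]!,
  caData: annotations[ANNOTATION_KUBERNETES_API_SERVER_CA]!,
  authProvider: annotations[ANNOTATION_KUBERNETES_AUTH_PROVIDER]!,
  oidcTokenProvider: annotations[ANNOTATION_KUBERNETES_OIDC_TOKEN_PROVIDER],
  skipMetricsLookup:
    annotations[ANNOTATION_KUBERNETES_SKIP_METRICS_LOOKUP] === 'true',
  skipTLSVerify: annotations[ANNOTATION_KUBERNETES_SKIP_TLS_VERIFY] === 'true',
  dashboardUrl: annotations[ANNOTATION_KUBERNETES_DASHBOARD_URL],
  dashboardApp: annotations[ANNOTATION_KUBERNETES_DASHBOARD_APP],
};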

👀 Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find similar issue

🏢 Have you read the Code of Conduct?

Are you willing to submit PR?

Yes I am willing to submit a PR!

@jamieklassen jamieklassen added the enhancement New feature or request label Dec 7, 2022
@github-actions github-actions bot added the area:catalog Related to the Catalog Project Area label Dec 7, 2022
@Rugvip Rugvip added the area:kubernetes Related to the Kubernetes Project Area - not deploying Backstage with k8s. label Dec 8, 2022