Is your feature request related to a problem? Please describe.
Currently the Kubernetes dashboard application binary is embedded within the workshop base image and the process is run within the workshop container. This means there is one instance of the Kubernetes dashboard running per workshop session.
It is possible to run a single Kubernetes dashboard instance for a workshop environment and configure each workshop session to proxy to that instance, setting an Authorization header whose bearer token is the Kubernetes access token for the session. This means there could be a single shared instance of the Kubernetes dashboard for use across all workshop sessions, reducing memory usage.
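In a workshop definition this can be expressed as a session ingress that injects the header on proxied requests, as in this excerpt from the proof of concept under additional information below:

```yaml
session:
  ingresses:
  - name: xconsole
    host: kubernetes-dashboard-api
    port: 9000
    path: /api
    headers:
    - name: Authorization
      value: "Bearer $(kubernetes_token)"
```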
Note that this will not work when using a virtual cluster though. In that case one instance of the Kubernetes dashboard still needs to be run per workshop session, because a single instance of the Kubernetes dashboard can only talk to one Kubernetes cluster.
Describe the solution you'd like
For normal workshop sessions working against the host cluster, run one shared Kubernetes dashboard. If a virtual cluster is enabled for a session, run one dashboard per workshop session, but as a sidecar container only accessible from the main workshop container. In both cases use the standard upstream Kubernetes dashboard image. A sketch of the sidecar case is given below.
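As a rough sketch only, and assuming the existing session.patches mechanism for patching the workshop pod can be used to inject an extra container, the virtual cluster case might look something like the following. The dashboard arguments and the kubeconfig mount path are illustrative and unvalidated:

```yaml
session:
  patches:
    containers:
    # Hypothetical sidecar running the v2.x dashboard alongside the
    # workshop container. Binding to 127.0.0.1 means it is only reachable
    # over localhost from within the workshop pod.
    - name: kubernetes-dashboard
      image: docker.io/kubernetesui/dashboard:v2.7.0
      args:
      - --enable-insecure-login
      - --insecure-bind-address=127.0.0.1
      - --insecure-port=9090
      # Assumes the virtual cluster kubeconfig is mounted at this path.
      - --kubeconfig=/opt/kubeconfig/config
```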
Describe alternatives you've considered
No response
Additional information
As a proof of concept, the following workshop definition can be used.
Note though that the following uses the v3.0.0 alpha version of the Kubernetes dashboard. For now the older v2.x version should be used instead, which uses only one container rather than two; a sketch of the v2.x variant is given after the definition.
```yaml
apiVersion: training.educates.dev/v1beta1
kind: Workshop
metadata:
  name: "lab-kubernetes-dashboard"
spec:
  title: "Kubernetes Dashboard"
  description: "Testing shared Kubernetes dashboard."
  publish:
    image: "$(image_repository)/lab-kubernetes-dashboard-files:$(workshop_version)"
  workshop:
    files:
    - image:
        url: "$(image_repository)/lab-kubernetes-dashboard-files:$(workshop_version)"
      includePaths:
      - /workshop/**
      - /exercises/**
      - /README.md
  session:
    namespaces:
      budget: medium
    applications:
      terminal:
        enabled: true
        layout: split
      editor:
        enabled: true
      console:
        enabled: true
      docker:
        enabled: false
      registry:
        enabled: false
      vcluster:
        enabled: false
    ingresses:
    - name: xconsole
      host: kubernetes-dashboard-api
      port: 9000
      path: /api
      headers:
      - name: Authorization
        value: "Bearer $(kubernetes_token)"
    - name: xconsole
      host: kubernetes-dashboard-web
      port: 8000
    dashboards:
    - name: XConsole
      url: "$(ingress_protocol)://xconsole-$(session_name).$(ingress_domain)/#/overview/?namespace=$(session_namespace)"
  environment:
    objects:
    - apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: kubernetes-dashboard
        namespace: $(workshop_namespace)
    - apiVersion: v1
      kind: Secret
      metadata:
        name: kubernetes-dashboard-csrf
        namespace: $(workshop_namespace)
      type: Opaque
      data:
        csrf: ""
    - apiVersion: v1
      kind: Secret
      metadata:
        name: kubernetes-dashboard-key-holder
        namespace: $(workshop_namespace)
      type: Opaque
    - apiVersion: v1
      kind: ConfigMap
      metadata:
        name: kubernetes-dashboard-settings
        namespace: $(workshop_namespace)
    - kind: Role
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: kubernetes-dashboard
        namespace: $(workshop_namespace)
      rules:
      # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
      - apiGroups: [ "" ]
        resources: [ "secrets" ]
        resourceNames: [ "kubernetes-dashboard-key-holder", "kubernetes-dashboard-csrf" ]
        verbs: [ "get", "update", "delete" ]
      # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
      - apiGroups: [ "" ]
        resources: [ "configmaps" ]
        resourceNames: [ "kubernetes-dashboard-settings" ]
        verbs: [ "get", "update" ]
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: kubernetes-dashboard
        namespace: $(workshop_namespace)
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: kubernetes-dashboard
      subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: $(workshop_namespace)
    - apiVersion: v1
      kind: Service
      metadata:
        name: kubernetes-dashboard-web
        namespace: $(workshop_namespace)
        labels:
          app.kubernetes.io/name: kubernetes-dashboard-web
          app.kubernetes.io/part-of: kubernetes-dashboard
          app.kubernetes.io/component: web
      spec:
        ports:
        - name: web
          port: 8000
        selector:
          app.kubernetes.io/name: kubernetes-dashboard-web
          app.kubernetes.io/part-of: kubernetes-dashboard
    - apiVersion: v1
      kind: Service
      metadata:
        name: kubernetes-dashboard-api
        namespace: $(workshop_namespace)
        labels:
          app.kubernetes.io/name: kubernetes-dashboard-api
          app.kubernetes.io/part-of: kubernetes-dashboard
          app.kubernetes.io/component: api
      spec:
        ports:
        - name: api
          port: 9000
        selector:
          app.kubernetes.io/name: kubernetes-dashboard-api
          app.kubernetes.io/part-of: kubernetes-dashboard
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: kubernetes-dashboard-web
        namespace: $(workshop_namespace)
        labels:
          app.kubernetes.io/name: kubernetes-dashboard-web
          app.kubernetes.io/part-of: kubernetes-dashboard
          app.kubernetes.io/component: web
      spec:
        replicas: 1
        revisionHistoryLimit: 10
        selector:
          matchLabels:
            app.kubernetes.io/name: kubernetes-dashboard-web
            app.kubernetes.io/part-of: kubernetes-dashboard
        template:
          metadata:
            labels:
              app.kubernetes.io/name: kubernetes-dashboard-web
              app.kubernetes.io/part-of: kubernetes-dashboard
              app.kubernetes.io/component: web
          spec:
            securityContext:
              runAsNonRoot: true
              seccompProfile:
                type: RuntimeDefault
            containers:
            - name: kubernetes-dashboard-web
              image: docker.io/kubernetesui/dashboard-web:v1.0.0
              imagePullPolicy: IfNotPresent
              ports:
              - containerPort: 8000
                name: web
                protocol: TCP
              volumeMounts:
              # Create on-disk volume to store exec logs
              - mountPath: /tmp
                name: tmp-volume
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
                capabilities:
                  drop: ["ALL"]
            volumes:
            - name: tmp-volume
              emptyDir: {}
            serviceAccountName: kubernetes-dashboard
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: kubernetes-dashboard-api
        namespace: $(workshop_namespace)
        labels:
          app.kubernetes.io/name: kubernetes-dashboard-api
          app.kubernetes.io/part-of: kubernetes-dashboard
          app.kubernetes.io/component: api
      spec:
        replicas: 1
        revisionHistoryLimit: 10
        selector:
          matchLabels:
            app.kubernetes.io/name: kubernetes-dashboard-api
            app.kubernetes.io/part-of: kubernetes-dashboard
        template:
          metadata:
            labels:
              app.kubernetes.io/name: kubernetes-dashboard-api
              app.kubernetes.io/part-of: kubernetes-dashboard
              app.kubernetes.io/component: api
          spec:
            securityContext:
              runAsNonRoot: true
              seccompProfile:
                type: RuntimeDefault
            containers:
            - name: kubernetes-dashboard-api
              image: docker.io/kubernetesui/dashboard-api:v1.0.0
              imagePullPolicy: IfNotPresent
              ports:
              - containerPort: 9000
                name: api
                protocol: TCP
              args:
              - --namespace=$(workshop_namespace)
              - --enable-insecure-login
              volumeMounts:
              # Create on-disk volume to store exec logs
              - mountPath: /tmp
                name: tmp-volume
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
                capabilities:
                  drop: ["ALL"]
            volumes:
            - name: tmp-volume
              emptyDir: {}
            serviceAccountName: kubernetes-dashboard
```
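For the v2.x variant, here is a minimal sketch of what would replace the two deployments and services above, assuming the upstream single-container image and its standard insecure HTTP arguments. The single session ingress would then point at port 9090 of this one service:

```yaml
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: kubernetes-dashboard
    namespace: $(workshop_namespace)
  spec:
    replicas: 1
    selector:
      matchLabels:
        app.kubernetes.io/name: kubernetes-dashboard
    template:
      metadata:
        labels:
          app.kubernetes.io/name: kubernetes-dashboard
      spec:
        serviceAccountName: kubernetes-dashboard
        containers:
        - name: kubernetes-dashboard
          image: docker.io/kubernetesui/dashboard:v2.7.0
          args:
          - --namespace=$(workshop_namespace)
          - --enable-insecure-login
          # Serve plain HTTP so the session ingress can proxy to it.
          - --insecure-bind-address=0.0.0.0
          - --insecure-port=9090
          ports:
          - containerPort: 9090
            name: http
            protocol: TCP
- apiVersion: v1
  kind: Service
  metadata:
    name: kubernetes-dashboard
    namespace: $(workshop_namespace)
  spec:
    ports:
    - name: http
      port: 9090
    selector:
      app.kubernetes.io/name: kubernetes-dashboard
```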
In making this change, Octant support should be dropped.
Before committing to this, it still needs to be validated how this will work for the virtual cluster scenario.