# Kubernetes Custom Resource Conversion Webhook Example

This is an example of setting up a Kubernetes CR (Custom Resource) conversion webhook in a GCE cluster.

This repo also contains benchmark tests that measure CR performance compared to native Kubernetes resources.

## Set up a CRD with webhook conversion

### TL;DR

Running `setup.sh` performs the following steps and sets up the cluster for you.

### Steps

1. Create a GCE cluster (n1-standard-8) with the `CustomResourceWebhookConversion` feature enabled:

   ```sh
   MASTER_SIZE=n1-standard-8 KUBE_FEATURE_GATES="ExperimentalCriticalPodAnnotation=true,CustomResourceWebhookConversion=true" KUBE_UP_AUTOMATIC_CLEANUP=true KUBE_APISERVER_REQUEST_TIMEOUT_SEC=600 ENABLE_APISERVER_INSECURE_PORT=true $GOPATH/src/k8s.io/kubernetes/cluster/kube-up.sh
   ```
1. Create a secret containing a TLS key and certificate:

   ```sh
   hack/webhook-create-signed-cert.sh \
       --service webhook-service.default.svc \
       --secret webhook-tls-certs \
       --namespace default
   ```
1. Create a CRD with the `caBundle` correctly configured from the TLS certificate:

   ```sh
   cat artifacts/crd-template.yaml | hack/webhook-patch-ca-bundle.sh --secret webhook-tls-certs > artifacts/crd-with-webhook.yaml
   kubectl apply -f artifacts/crd-with-webhook.yaml
   ```
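Once applied, you can sanity-check that the conversion stanza made it into the CRD. This is a sketch assuming the cluster from the steps above; the CRD name `foos.stable.example.com` is inferred from the example resources:

```sh
# Requires the cluster set up above. Prints "Webhook" if the conversion
# stanza (with the patched caBundle) was applied correctly.
# Note: "foos.stable.example.com" is an assumed CRD name.
kubectl get crd foos.stable.example.com -o jsonpath='{.spec.conversion.strategy}{"\n"}'
```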
1. Create a conversion webhook that uses the TLS certificate and key:

   ```sh
   kubectl apply -f artifacts/webhook-pod.yaml
   kubectl apply -f artifacts/webhook-service.yaml
   # Wait a few seconds for the service endpoints to become available
   ```
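For context on what the webhook pod actually handles: the apiserver POSTs a `ConversionReview` object to it whenever a resource must be served at a version other than its stored one. The sketch below shows the shape of such a request; the field values (UID, `hostPort`) are illustrative, not captured from a real cluster:

```sh
# Sketch of a ConversionReview request body as sent by the apiserver
# (apiextensions.k8s.io/v1beta1 era, matching the feature gate above).
# All concrete values below are illustrative assumptions.
cat > /tmp/conversion-review-request.json <<'EOF'
{
  "apiVersion": "apiextensions.k8s.io/v1beta1",
  "kind": "ConversionReview",
  "request": {
    "uid": "0000-example-uid",
    "desiredAPIVersion": "stable.example.com/v2",
    "objects": [
      {
        "apiVersion": "stable.example.com/v1",
        "kind": "Foo",
        "metadata": {"name": "foov1", "namespace": "default"},
        "hostPort": "localhost:8080"
      }
    ]
  }
}
EOF

# The webhook must echo request.uid in its response and return the objects
# converted to desiredAPIVersion.
grep -o '"desiredAPIVersion": "[^"]*"' /tmp/conversion-review-request.json
```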
1. Create custom resources at both supported versions for the CRD:

   ```sh
   kubectl apply -f artifacts/foov1.yaml
   kubectl apply -f artifacts/foov2.yaml
   ```
1. Read using both versions:

   ```sh
   kubectl get foo foov1
   kubectl get foo.v1.stable.example.com foov1
   ```
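Conversion can be observed by comparing the `apiVersion` returned at each version-qualified endpoint. This is a sketch assuming the cluster from the steps above; the `v2` endpoint name is inferred from the two supported versions:

```sh
# Requires the cluster and CRD set up above. Fetching through a
# version-specific resource name forces the apiserver to serve the stored
# object at that version, converting via the webhook when needed.
kubectl get foo.v1.stable.example.com foov1 -o jsonpath='{.apiVersion}{"\n"}'
kubectl get foo.v2.stable.example.com foov1 -o jsonpath='{.apiVersion}{"\n"}'
```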
1. Create a CRD without conversion:

   ```sh
   kubectl apply -f artifacts/bar-crd.yaml
   ```
1. Create test namespaces:

   ```sh
   kubectl create ns empty
   kubectl create ns large-data
   kubectl create ns large-metadata
   ```

## Benchmark testing

We suggest running the benchmarks on the master VM to reduce network noise.

```sh
# Push test binary and kubeconfig to the GCE master VM
make push_test

# Move kubeconfig and binaries into place
mkdir -p ~/.kube && mv /tmp/kubeconfig ~/.kube/config
sudo mv /tmp/conversion-webhook-example.test /run
sudo mv /tmp/conversion-webhook-example /run
sudo mv /tmp/run-tachymeter.sh /run

# Run benchmarks
/run/conversion-webhook-example.test -test.benchtime=100x -test.cpu 1 -test.bench=.
/run/conversion-webhook-example.test -test.benchtime=100x -test.cpu 1 -test.bench=CreateLatency
/run/conversion-webhook-example.test -test.benchtime=100x -test.cpu 1 -test.bench=CreateThroughput
/run/conversion-webhook-example.test -test.benchtime=100x -test.cpu 1 -test.bench=List

# Run tachymeter tests
/run/conversion-webhook-example --name="Benchmark_CreateLatency_CR"
/run/run-tachymeter.sh
```
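Tachymeter summarizes raw latency samples into percentiles rather than a single mean. The same idea can be sketched in plain shell; this block is illustrative only (hard-coded sample data, not part of the repo's tooling):

```sh
# Illustrative only: compute p50/p99 from a list of latency samples (ms),
# the kind of summary tachymeter produces for the benchmark runs.
printf '%s\n' 12 15 11 90 14 13 16 10 17 300 | sort -n > /tmp/samples.txt
n=$(wc -l < /tmp/samples.txt)
# Percentile index via ceil(n * p / 100)
p50=$(sed -n "$(( (n * 50 + 99) / 100 ))p" /tmp/samples.txt)
p99=$(sed -n "$(( (n * 99 + 99) / 100 ))p" /tmp/samples.txt)
echo "p50=${p50}ms p99=${p99}ms"   # → p50=14ms p99=300ms
```

Percentiles make tail latency visible: here a single 300 ms outlier leaves p50 untouched but dominates p99.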

## References