
Add a new Kubernetes version support

Yuriy Losev edited this page Jan 31, 2024 · 5 revisions

How to add a new Kubernetes version to Deckhouse

(Add the new version and delete the old one simultaneously, e.g., add 1.25 and remove 1.20.)


Phase 1 (Github CI)

Fix and merge the GitHub Actions changes in a separate PR first, or there will be no e2e tests (everything lives in the .github directory). Example

  • Add the new version to constant.js
  • Add the new version to e2e.multi.yml
  • Generate the workflows with the render-workflows.sh script
  • Create and merge a PR
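
The render step can be scripted; a minimal sketch, assuming the script lives at .github/render-workflows.sh (the path is an assumption based on the script name above) and that you run it from the repository root:

```shell
# Re-generate the workflow files after editing constant.js and e2e.multi.yml.
render_workflows() {
  script="${1:-.github/render-workflows.sh}"
  if [ -x "$script" ]; then
    "$script"
  else
    echo "render script not found or not executable: $script" >&2
    return 1
  fi
}

# Usage: render_workflows && git add .github && git commit -m "add Kubernetes 1.25 to CI"
```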

Phase 2 (Deckhouse)

  1. Add new version to candi/version_map.yaml

    • Find and add the current patch version of Kubernetes

    • Update the provider versions for the CCM (cloud controller manager)

    • Update the CSI sidecar versions:

      • openstack: matches the CCM version, just copy the tag
      • provisioner
        • `export VERSION=XXX && crictl pull registry.k8s.io/sig-storage/csi-provisioner:${VERSION} && crictl inspecti registry.k8s.io/sig-storage/csi-provisioner:${VERSION} | jq -r .status.repoDigests[0] | cut -f2 -d"@"` to get the sha256 (set the VERSION variable from the release)
      • attacher: the same command with csi-attacher
      • resizer: the same command with csi-resizer
      • registrar: the same command with csi-node-driver-registrar
      • snapshotter: the same command with csi-snapshotter
      • livenessprobe: the same command with livenessprobe
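
The per-image commands differ only in the image name, so they can be generated by one helper; a sketch that only builds and prints each command (run the printed commands on a node with crictl and jq; the helper name and the vX.Y.Z placeholders are made up for this example):

```shell
# Build the digest-fetch command for a sig-storage sidecar image.
digest_cmd() {
  image="$1"
  version="$2"
  ref="registry.k8s.io/sig-storage/${image}:${version}"
  echo "crictl pull ${ref} && crictl inspecti ${ref} | jq -r .status.repoDigests[0] | cut -f2 -d'@'"
}

# Each sidecar has its own release tag: replace vX.Y.Z per image.
for image in csi-provisioner csi-attacher csi-resizer \
             csi-node-driver-registrar csi-snapshotter livenessprobe; do
  digest_cmd "$image" "vX.Y.Z"
done
```

Paste the sha256 each printed command returns into the corresponding entry of candi/version_map.yaml.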
  2. Add the new version and its anchors to ee/candi/version_map.yaml

  3. Remove the deprecated version from candi/version_map.yaml and ee/candi/version_map.yaml (don't forget to rearrange all the YAML anchors)

  4. Update the images in the 040-control-plane-manager module.

    • Check that the golang images in werf.inc.yaml are up to date
      • Copy the required golang images into our registry (run tools/regcopy/main.go)
      • Record them in the candi/image_versions.yaml file
    • Add patches for the new version (and verify them at the same time)
      • Images that have patches:
        • control-plane-manager
        • kube-apiserver
        • kube-controller-manager
    • Remove the patches for the deleted version
      • Images:
        • control-plane-manager
        • kube-apiserver
        • kube-controller-manager
    • Fix the tests that use the deprecated version
      • Especially the effective_kubernetes_version test (a simple approach is to iterate over all minor versions)
  5. Update the list of supported versions in modules/040-control-plane-manager/images/control-plane-manager/control-plane-manager#92 and check the experimental patches (for versions up to 1.22) on line 322

  6. Check conditions in modules/040-control-plane-manager/kubeadm/config.yaml.tpl

  7. Update the versions in

    • modules/040-control-plane-manager/openapi/values.yaml#18 and the config-values.yaml next to it
    • candi/openapi/node_group.yaml
    • candi/openapi/cluster_configuration.yaml
  8. Check conditions in the candi/bashible/common-steps/node-group/064_configure_kubelet.sh.tpl file

  9. Set the new minimal version in release.yaml, in the requirements section.

  10. Remove a deprecated version from the .github directory files:

    • e2e.multi.yml
    • constant.js
  11. Clean the templates of outdated Kubernetes versions; a plain text search is enough (Ctrl+F and go through the matches)

  12. Clean up the deprecated hooks the same way (Ctrl+F again)
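
The searches in steps 11 and 12 can also be run repository-wide from a shell; a sketch, assuming it is run from the repository root and that 1.20 is the version being dropped (the helper name is made up for this example):

```shell
# List every remaining reference to the dropped minor version under a directory.
find_old_version_refs() {
  old="$1"
  dir="${2:-.}"
  # Escape the dot so "1.20" does not also match, say, "1x20".
  pattern=$(printf '%s' "$old" | sed 's/\./\\./g')
  grep -rn "$pattern" "$dir" || echo "no references to $old left"
}

# Usage (from the repository root):
# find_old_version_refs "1.20" modules
# find_old_version_refs "1.20" candi
```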

  13. Add the APIs deprecated in the new version to modules/340-monitoring-kubernetes/hooks/helm.go

    • Clean up the entries that are no longer needed (search the repo)
  14. Update the cloud providers: images/**/werf.yaml and images/**/patches

  15. Update the cri-tools

    • files in 007-registrypackages/images/kubeadm-$distrib
  16. Fix the D8KubernetesVersionIsDeprecated alert and bump the deprecated version in it

  17. go_lib/dependency/k8s/drain is taken from upstream. Check that it is up to date (better to bump it together with the client-go library rather than with a new version of Kubernetes)


Phase 3 (CNCF certification)

  1. Clone the repo.

  2. Go to the folder with the name corresponding to the version of Kubernetes (e.g., v1.27), and create the deckhouse folder.

  3. Copy the PRODUCT.yaml and README.md files from the previous version's folder (e.g., v1.26) and adjust them. Set the upcoming Deckhouse version in the version parameter of the PRODUCT.yaml file (see the example for v1.26).

  4. Follow the instructions in the README.md file to run the tests.

    TLDR:

    • Get and unpack the latest release of the framework, e.g.:
      wget https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.56.17/sonobuoy_0.56.17_linux_amd64.tar.gz && \
      tar xzf sonobuoy_0.56.17_linux_amd64.tar.gz
    • Run `./sonobuoy run --mode=certified-conformance`.
    • Check `./sonobuoy status` after ~30 min. There should be no FAILED results; otherwise, get the logs with `./sonobuoy logs` and see what went wrong.
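
The status check can be wrapped in a polling loop; a sketch, assuming the binary sits in the current directory and that a finished run contains "has completed" in the `./sonobuoy status` output (the wording may differ between sonobuoy versions, so adjust the match if needed):

```shell
# Poll the status command until the run completes or a plugin fails.
wait_for_sonobuoy() {
  status_cmd="${1:-./sonobuoy status}"
  interval="${2:-60}"
  while true; do
    out=$($status_cmd 2>&1) || { echo "$out" >&2; return 2; }
    echo "$out"
    case "$out" in
      *"has completed"*) return 0 ;;
      *FAILED*) echo "a plugin failed; inspect with ./sonobuoy logs" >&2; return 1 ;;
    esac
    sleep "$interval"
  done
}

# Usage: wait_for_sonobuoy "./sonobuoy status" 60
```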
  5. Send the results to CNCF.

    • Get the results:
      ./sonobuoy retrieve . ; mkdir ./results; tar xzf *.tar.gz -C ./results
    • Make sure that everything PASSed and there are no FAILs in the e2e.log and junit_01.xml files.
    • Put the e2e.log and junit_01.xml files in the same folder (see step 2).
    • Create PR in the repo.
  6. Don't forget to update Deckhouse repo and site after getting certification.