
PodDisruptionBudgets API version policy/v1beta1 removed in Kubernetes 1.25 #1104

Closed
jameshearttech opened this issue Oct 17, 2022 · 10 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.


@jameshearttech

jameshearttech commented Oct 17, 2022

What would you like to be added:
PodDisruptionBudgets policy/v1

Why is this needed:
PodDisruptionBudgets API version policy/v1beta1 removed in Kubernetes 1.25
https://kubernetes.io/docs/reference/using-api/deprecation-guide/#poddisruptionbudget-v125

Example:

k8sadmin@dev-master0:~$ kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.25.3
k8sadmin@dev-master0:~$ kubectl api-resources | sed -e '1p' -e '/poddisruptionbudgets/!d'
NAME                              SHORTNAMES                                      APIVERSION                             NAMESPACED   KIND
poddisruptionbudgets              pdb                                             policy/v1                              true         PodDisruptionBudget
k8sadmin@dev-master0:~$ cat ./manifests/high-availability.yaml | grep -B 1 PodDisruptionBudget
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
k8sadmin@dev-master0:~$ kubectl apply -f ./manifests/high-availability.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
error: resource mapping not found for name: "metrics-server" namespace: "kube-system" from "./manifests/high-availability.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first
k8sadmin@dev-master0:~$ kubectl get pod -A | sed -e '1p' -e '/metrics-server/!d'
NAMESPACE              NAME                                         READY   STATUS    RESTARTS       AGE
kube-system            metrics-server-859bdcd57d-nfnqn              0/1     Running   0              35s
kube-system            metrics-server-859bdcd57d-rcdtk              0/1     Running   0              35s
k8sadmin@dev-master0:~$ kubectl describe pod metrics-server-859bdcd57d-nfnqn -n kube-system
Name:                 metrics-server-859bdcd57d-nfnqn
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      metrics-server
Node:                 dev-worker2/10.69.2.35
Start Time:           Mon, 17 Oct 2022 05:43:10 +0000
Labels:               k8s-app=metrics-server
                      pod-template-hash=859bdcd57d
Annotations:          cni.projectcalico.org/containerID: 9c41fbfe71a2562f7d84c27f482c2e584ab0dab4d6fe68649f53aed40d8eab6f
                      cni.projectcalico.org/podIP: 192.168.184.74/32
                      cni.projectcalico.org/podIPs: 192.168.184.74/32
Status:               Running
IP:                   192.168.184.74
IPs:
  IP:           192.168.184.74
Controlled By:  ReplicaSet/metrics-server-859bdcd57d
Containers:
  metrics-server:
    Container ID:  containerd://2cae899a2b41e8ab45768cc41e64549adbc5525d7586fcb2713179da6a2b10d1
    Image:         k8s.gcr.io/metrics-server/metrics-server:v0.6.1
    Image ID:      k8s.gcr.io/metrics-server/metrics-server@sha256:5ddc6458eb95f5c70bd13fdab90cbd7d6ad1066e5b528ad1dcb28b76c5fb2f00
    Port:          4443/TCP
    Host Port:     0/TCP
    Args:
      --cert-dir=/tmp
      --secure-port=4443
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --kubelet-use-node-status-port
      --metric-resolution=15s
    State:          Running
      Started:      Mon, 17 Oct 2022 05:43:11 +0000
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     200Mi
    Liveness:     http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5cqj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  tmp-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-k5cqj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  91s               default-scheduler  Successfully assigned kube-system/metrics-server-859bdcd57d-nfnqn to dev-worker2
  Normal   Pulled     90s               kubelet            Container image "k8s.gcr.io/metrics-server/metrics-server:v0.6.1" already present on machine
  Normal   Created    90s               kubelet            Created container metrics-server
  Normal   Started    90s               kubelet            Started container metrics-server
  Warning  Unhealthy  1s (x8 over 61s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500

/kind feature

@k8s-ci-robot added the kind/feature label on Oct 17, 2022
@yangjunmyfm192085
Contributor

In the main branch, the PodDisruptionBudget API version has already been changed to policy/v1.
@serathius Should we release v0.7?
What is strange is that the current e2e tests for release-0.6 already cover Kubernetes 1.25, yet no problem was found.
I'll look into it.

@jcpunk
Contributor

jcpunk commented Oct 20, 2022

I'm showing:

Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget

@jameshearttech
Author

Right. That's the cause of the error in the example.

@stevehipwell
Contributor

This has been handled in the Helm chart since it was first released from this repo, as the chart can support both API versions. The plain manifests will need to be swapped over to policy/v1 to support K8s v1.25, which will make v1.21 the lowest K8s version supported by the manifests.
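
As an illustration, a minimal policy/v1 PodDisruptionBudget for the plain manifests; the name and namespace match the resource in the error above, while the spec values are assumptions rather than a quote of the real high-availability.yaml:

apiVersion: policy/v1            # previously policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  minAvailable: 1                # assumed value
  selector:
    matchLabels:
      k8s-app: metrics-server    # label taken from the pod description above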

@jcpunk
Contributor

jcpunk commented Oct 24, 2022

Is there a way I can help get a version with the right policy tagged and made available via helm?

@stevehipwell
Contributor

@jcpunk, support for the correct PodDisruptionBudget API has been in the Helm chart since v3.5.0, which was the first release from this repo.

{{- define "metrics-server.pdb.apiVersion" -}}
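
For reference, a sketch of how such a helper typically selects the API version from the target cluster's Kubernetes version; this illustrates the pattern rather than quoting the chart's exact implementation:

{{- define "metrics-server.pdb.apiVersion" -}}
{{- /* Assume policy/v1 on Kubernetes >= 1.21, fall back to policy/v1beta1 otherwise */ -}}
{{- if semverCompare ">=1.21-0" .Capabilities.KubeVersion.Version -}}
policy/v1
{{- else -}}
policy/v1beta1
{{- end -}}
{{- end -}}

A helper like this is why the helm template commands shown later in this thread render different apiVersion values depending on the --kube-version flag.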

@jcpunk
Contributor

jcpunk commented Oct 24, 2022

Interesting, mine doesn't seem to be running. Thought I'd tracked it down to this issue... guess I'll need to do more research....

@jameshearttech
Author

jameshearttech commented Oct 24, 2022

After receiving Steve's response, I took another look at this. I read up on how to use kustomize, applied release-ha, and metrics-server appears to be working as expected now. Thanks so much.
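
For anyone hitting the same problem, applying a kustomization with kubectl's built-in kustomize support looks roughly like this; the path is a placeholder, since the exact location of the release-ha kustomization isn't shown in this thread:

kubectl apply -k <path-to-release-ha-kustomization>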

@stevehipwell
Contributor

@jcpunk the following steps should help you figure out what's wrong. I'm working on the assumption that you're running a recent version of Helm v3.

Run helm repo list and you should see the following entry.

metrics-server https://kubernetes-sigs.github.io/metrics-server/

Run helm repo update && helm search repo metrics-server and you should see the following entry.

metrics-server/metrics-server 3.8.2 0.6.1 Metrics Server is a scalable, efficient source ...

Run the following command and you should see the output below.

helm --namespace kube-system template --kube-version 1.20.0 metrics-server metrics-server/metrics-server --version 3.8.2 --set podDisruptionBudget.enabled=true --set podDisruptionBudget.minAvailable=1

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: metrics-server
  labels:
    helm.sh/chart: metrics-server-3.8.2
    app.kubernetes.io/name: metrics-server
    app.kubernetes.io/instance: metrics-server
    app.kubernetes.io/version: "0.6.1"
    app.kubernetes.io/managed-by: Helm
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: metrics-server
      app.kubernetes.io/instance: metrics-server

Run the following command and you should see the output below.

helm --namespace kube-system template --kube-version 1.21.0 metrics-server metrics-server/metrics-server --version 3.8.2 --set podDisruptionBudget.enabled=true --set podDisruptionBudget.minAvailable=1

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: metrics-server
  labels:
    helm.sh/chart: metrics-server-3.8.2
    app.kubernetes.io/name: metrics-server
    app.kubernetes.io/instance: metrics-server
    app.kubernetes.io/version: "0.6.1"
    app.kubernetes.io/managed-by: Helm
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: metrics-server
      app.kubernetes.io/instance: metrics-server

@jameshearttech
Author

jameshearttech commented Oct 25, 2022 via email
