Updated description of policy #955

Merged (2 commits, Mar 24, 2024)
10 changes: 7 additions & 3 deletions other/scale-deployment-zero/artifacthub-pkg.yml
@@ -3,7 +3,9 @@ version: 1.0.0
displayName: Scale Deployment to Zero
createdAt: "2023-04-10T20:30:07.000Z"
description: >-
If a Deployment's Pods are seen crashing multiple times it usually indicates there is an issue that must be manually resolved. Removing the failing Pods and marking the Deployment is often a useful troubleshooting step. This policy watches existing Pods and if any are observed to have restarted more than once, indicating a potential crashloop, Kyverno scales its parent deployment to zero and writes an annotation signaling to an SRE team that troubleshooting is needed. It may be necessary to grant additional privileges to the Kyverno ServiceAccount, via one of the existing ClusterRoleBindings or a new one, so it can modify Deployments.
If a Deployment's Pods are seen crashing multiple times it usually indicates there is an issue that must be manually resolved. Removing the failing Pods and marking the Deployment is often a useful troubleshooting step. This policy watches existing Pods and if any are observed to have restarted more than once, indicating a potential crashloop, Kyverno scales its parent deployment to zero and writes an annotation signaling to an SRE team that troubleshooting is needed. It may be necessary to grant additional privileges to the Kyverno ServiceAccount, via one of the existing ClusterRoleBindings or a new one, so it can modify Deployments. This policy scales down deployments with frequently restarting pods by monitoring `Pod.status` for `restartCount`
updates, which are performed by the kubelet. No `resourceFilter` modifications are needed if matching on `Pod` and `Pod.status`.
Note: As of Kyverno 1.10, you must modify Kyverno's ConfigMap to remove or change the line `excludeGroups: system:nodes` for this policy to work.
install: |-
```shell
kubectl apply -f https://raw.githubusercontent.com/kyverno/policies/main/other/scale-deployment-zero/scale-deployment-zero.yaml
@@ -12,11 +14,13 @@ keywords:
- kyverno
- other
readme: |
If a Deployment's Pods are seen crashing multiple times it usually indicates there is an issue that must be manually resolved. Removing the failing Pods and marking the Deployment is often a useful troubleshooting step. This policy watches existing Pods and if any are observed to have restarted more than once, indicating a potential crashloop, Kyverno scales its parent deployment to zero and writes an annotation signaling to an SRE team that troubleshooting is needed. It may be necessary to grant additional privileges to the Kyverno ServiceAccount, via one of the existing ClusterRoleBindings or a new one, so it can modify Deployments.
If a Deployment's Pods are seen crashing multiple times it usually indicates there is an issue that must be manually resolved. Removing the failing Pods and marking the Deployment is often a useful troubleshooting step. This policy watches existing Pods and if any are observed to have restarted more than once, indicating a potential crashloop, Kyverno scales its parent deployment to zero and writes an annotation signaling to an SRE team that troubleshooting is needed. It may be necessary to grant additional privileges to the Kyverno ServiceAccount, via one of the existing ClusterRoleBindings or a new one, so it can modify Deployments. This policy scales down deployments with frequently restarting pods by monitoring `Pod.status` for `restartCount`
updates, which are performed by the kubelet. No `resourceFilter` modifications are needed if matching on `Pod` and `Pod.status`.
Note: As of Kyverno 1.10, you must modify Kyverno's ConfigMap to remove or change the line `excludeGroups: system:nodes` for this policy to work.

Refer to the documentation for more details on Kyverno annotations: https://artifacthub.io/docs/topics/annotations/kyverno/
annotations:
kyverno/category: "Other"
kyverno/kubernetesVersion: "1.23"
kyverno/subject: "Deployment"
digest: 29025a98c509c07e1cc2d00b311d828382c14ce218383ba8b3da7269a7253343
digest: 4f6fff86d18795edfb1ba656ea055a05a31bc711787aec1e87c84c11f27503e2
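The new description hinges on two operational points: pods whose `restartCount` has climbed past one, and the `excludeGroups: system:nodes` entry in Kyverno's ConfigMap that would otherwise filter out the kubelet's status updates. A minimal sketch of both, assuming a default Kyverno install (ConfigMap `kyverno` in the `kyverno` namespace) and `jq` available locally:

```shell
# List pods whose containers have restarted more than once -- the condition
# the policy reacts to.
kubectl get pods -A -o json \
  | jq -r '.items[]
           | select(any(.status.containerStatuses[]?; .restartCount > 1))
           | "\(.metadata.namespace)/\(.metadata.name)"'

# Drop the excludeGroups entry called out in the note above. A JSON merge
# patch with a null value deletes the key; the ConfigMap and namespace names
# assume a default install, so adjust them for yours.
kubectl -n kyverno patch configmap kyverno --type merge \
  -p '{"data":{"excludeGroups":null}}'
```

Alternatively, `kubectl -n kyverno edit configmap kyverno` and changing the value by hand achieves the same result while letting you review the rest of the settings.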
5 changes: 5 additions & 0 deletions other/scale-deployment-zero/scale-deployment-zero.yaml
@@ -19,6 +19,11 @@ metadata:
and writes an annotation signaling to an SRE team that troubleshooting is needed.
It may be necessary to grant additional privileges to the Kyverno ServiceAccount,
via one of the existing ClusterRoleBindings or a new one, so it can modify Deployments.
This policy scales down deployments with frequently restarting pods by monitoring `Pod.status`
for `restartCount` updates, which are performed by the kubelet. No `resourceFilter` modifications
are needed if matching on `Pod` and `Pod.status`.
Note: As of Kyverno 1.10, you must modify Kyverno's ConfigMap to remove or change the line
`excludeGroups: system:nodes` for this policy to work.
spec:
rules:
- name: annotate-deployment-rule
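The description also notes that Kyverno's ServiceAccount may need extra privileges to modify Deployments. One way to grant them is an aggregated ClusterRole; the sketch below assumes a default Kyverno 1.10+ install, and the role name and aggregation labels are assumptions, so verify them against the label selectors on your Kyverno ClusterRoles before applying anything.

```yaml
# Hypothetical ClusterRole granting the background controller write access to
# Deployments. The aggregation labels below match a default Helm install of
# Kyverno 1.10+; confirm them in your cluster before relying on this.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kyverno:scale-deployments    # hypothetical name
  labels:
    app.kubernetes.io/component: background-controller
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/part-of: kyverno
rules:
  - apiGroups:
      - apps
    resources:
      - deployments
    verbs:
      - get
      - list
      - watch
      - update
      - patch
```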