adding pod anti-affinity to Kyverno #1985
Conversation
@realshuting after making changes to:

```diff
@@ -28,7 +28,16 @@ spec:
       securityContext: {{ tpl (toYaml .) $ | nindent 8 }}
     {{- end }}
     {{- with .Values.affinity }}
-      affinity: {{ tpl (toYaml .) $ | nindent 8 }}
+      affinity:
```
Addressing this review - I couldn't find how to add pod anti-affinity to Helm charts, so I'm not sure if what I've done is correct. @realshuting could you please point me to the correct way of doing this? Thanks.
Initially I saw how Elasticsearch had implemented this in their charts, so I did the same, but I'm now confused about whether the change is required in the deployment.yaml or the values.yaml file.
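For context, the usual pattern (which the Elasticsearch chart also follows) is to render the anti-affinity block in the Deployment template and gate it on a chart value. A minimal sketch of what deployment.yaml could render, assuming a hypothetical `podAntiAffinity` toggle in values.yaml; the label selector is written out literally here rather than via the chart's `kyverno.matchLabels` helper:

```yaml
{{- if .Values.podAntiAffinity }}  # hypothetical value name, not from this PR
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: kyverno  # selector shown literally for clarity
        topologyKey: kubernetes.io/hostname  # spread pods across nodes
{{- end }}
```

With this split, values.yaml only carries the toggle while the template owns the Kubernetes structure, which keeps user-facing configuration small.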
@RinkiyaKeDad The Helm failure has this message:
If I had to guess, the labels are the problem. I'll try to run the Helm tests on my local system to see if I can reproduce and maybe probe things a bit to see what's broken.
I found the problem. During the Helm testing, a test pod is launched to execute some commands, and because it shares the chart's main labels it is presumably caught by the new anti-affinity rule. Here is one possible solution I verified on my local system:

```diff
diff --git a/charts/kyverno/templates/_helpers.tpl b/charts/kyverno/templates/_helpers.tpl
index 05934cdb..4be734b2 100644
--- a/charts/kyverno/templates/_helpers.tpl
+++ b/charts/kyverno/templates/_helpers.tpl
@@ -42,6 +42,17 @@ helm.sh/chart: {{ template "kyverno.chart" . }}
 {{- end }}
 {{- end -}}
 
+{{/* Helm required labels */}}
+{{- define "kyverno.test-labels" -}}
+app.kubernetes.io/component: kyverno
+app.kubernetes.io/instance: {{ .Release.Name }}
+app.kubernetes.io/managed-by: {{ .Release.Service }}
+app.kubernetes.io/name: {{ template "kyverno.name" . }}-test
+app.kubernetes.io/part-of: {{ template "kyverno.name" . }}
+app.kubernetes.io/version: "{{ .Chart.Version }}"
+helm.sh/chart: {{ template "kyverno.chart" . }}
+{{- end -}}
+
 {{/* matchLabels */}}
 {{- define "kyverno.matchLabels" -}}
 app.kubernetes.io/name: {{ template "kyverno.name" . }}
diff --git a/charts/kyverno/templates/tests/test.yaml b/charts/kyverno/templates/tests/test.yaml
index f176cdd4..3f548657 100644
--- a/charts/kyverno/templates/tests/test.yaml
+++ b/charts/kyverno/templates/tests/test.yaml
@@ -3,7 +3,7 @@ kind: Pod
 metadata:
   name: "{{ template "kyverno.fullname" . }}-test"
   labels:
-    {{- include "kyverno.labels" . | nindent 4 }}
+    {{- include "kyverno.test-labels" . | nindent 4 }}
   annotations:
     "helm.sh/hook": test
 spec:
```

This pretty much just duplicates the main label set but changes `app.kubernetes.io/name` to carry a `-test` suffix, so the test pod no longer matches the chart's main labels.
It worked! Thank you so much for helping @treydock! :)
One issue I see with forcing anti-affinity: if someone sets replicaCount=2 and has, say, a two-node dev cluster, it becomes impossible to do a rolling restart during updates. I ran into this on my dev cluster when running a two-replica deployment. I solved it with this:

So it might be good to expose this in the chart's values.
dependent on - #2006
@treydock - can you please check the PR and let me know if anything else is required. |
Looks fine to me. The only thing I'd recommend changing is to make it possible, via the Helm chart, to turn off the anti-affinity, so that in simpler single-node Kubernetes clusters it's possible to test HA without needing a second node. Maybe a dedicated Helm value. Another option would be to make the Helm values look like this:

```yaml
antiAffinity:
  enable: true
  topologyKey: "kubernetes.io/hostname"
```
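If the chart went with those values, the Deployment template could consume them with something like the following sketch (not the PR's final implementation; it assumes the `antiAffinity.enable` and `antiAffinity.topologyKey` values proposed above and uses the chart's existing `kyverno.matchLabels` helper):

```yaml
{{- if .Values.antiAffinity.enable }}
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels: {{- include "kyverno.matchLabels" . | nindent 12 }}
        topologyKey: {{ .Values.antiAffinity.topologyKey }}
{{- end }}
```

Setting `antiAffinity.enable: false` would then omit the block entirely, which is what a single-node cluster needs.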
@treydock - I have made the changes. Please check and let me know if I need to do any further changes. |
Looks good to me
Signed-off-by: RinkiyaKeDad <arshsharma461@gmail.com>
Signed-off-by: NoSkillGirl <singhpooja240393@gmail.com>
@RinkiyaKeDad @NoSkillGirl - I'm not able to update Kyverno running in a single-node cluster when installed with the direct manifest. Should we remove the affinity block at kyverno/definitions/install.yaml, lines 6713 to 6722 (adb7858)?

After removing it, does it work as expected? What about when we run multiple Kyverno pods if we remove the above block?
I've run into similar issues on regular deployments when pod anti-affinity is enabled together with a rolling update strategy. So I think the issue is that pod anti-affinity combined with the rolling update might cause problems with single-node deployments and updates, because one pod is terminating and won't let another start on that same node.
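One way around the interaction described above, at the Deployment level, is to stop the update from surging: with hard anti-affinity on a single node, a surge-based rolling update deadlocks because the replacement pod can never schedule next to the old one, whereas letting the old pod terminate first avoids the conflict. A sketch of the strategy settings involved (illustrative, not from this PR):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0        # don't create the new pod until an old one is gone
      maxUnavailable: 1  # allow the old pod to terminate first
```

The trade-off is a brief window with one fewer replica during the update, which is usually acceptable on a dev cluster.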
I have not worked on this PR and don't have much understanding of the above changes. I connected with @RinkiyaKeDad - he says he has lost a lot of context and doesn't remember much. @treydock @realshuting - should I revert this PR?
Thanks @NoSkillGirl! No, we should update the anti-affinity to the soft limit, see #1982 (comment). @anushkamittal20 is working on that (let's create a separate issue to track). |
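For reference, the "soft limit" mentioned here means switching from `requiredDuringSchedulingIgnoredDuringExecution` to `preferredDuringSchedulingIgnoredDuringExecution`: the scheduler spreads pods across nodes when it can, but still schedules them when it can't (e.g. on a single node). A sketch of what that could look like (field names are from the Kubernetes Pod API; the selector is illustrative, not necessarily what the follow-up PR uses):

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: kyverno
          topologyKey: kubernetes.io/hostname
```

Unlike the hard form, this never blocks scheduling or rolling updates, so it is safe as a default even for single-node installs.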
Signed-off-by: RinkiyaKeDad arshsharma461@gmail.com
Related issue
Fixes #1966
What type of PR is this
Proposed Changes
Added pod anti-affinity to Kyverno.
This can be enabled or disabled by editing the following flag in charts/kyverno/values.yaml:
Proof Manifests
Checklist
Further Comments