Add Helm chart support for k8s node tolerations #13214

Merged: 5 commits, Apr 29, 2021
4 changes: 4 additions & 0 deletions integration/kubernetes/helm-chart/alluxio/CHANGELOG.md
@@ -163,3 +163,7 @@
0.6.17

- Add hostAliases in Master and Worker Pods

0.6.18

- Add support for Node tolerations
Fuse template:
@@ -48,6 +48,13 @@ spec:
{{ toYaml .Values.fuse.nodeSelector | trim | indent 8 }}
{{- else if .Values.nodeSelector }}
{{ toYaml .Values.nodeSelector | trim | indent 8 }}
{{- end }}
tolerations:
{{- if .Values.fuse.tolerations }}
{{ toYaml .Values.fuse.tolerations | trim | indent 8 }}
{{- end }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | trim | indent 8 }}
{{- end }}
securityContext:
runAsUser: {{ .Values.fuse.user }}
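For illustration only (not part of the diff), here is roughly how the new block renders, assuming a hypothetical fuse.tolerations value: toYaml serializes the list, trim strips surrounding whitespace, and indent 8 prefixes each line with eight spaces so the entries line up under the tolerations: key of the Pod spec. Given, say,

fuse:
  tolerations:
    - key: "alluxio"
      operator: "Exists"
      effect: "NoSchedule"

the fuse Pod spec would contain approximately:

      tolerations:
        - effect: NoSchedule
          key: alluxio
          operator: Exists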
Logserver deployment template:
@@ -41,6 +41,13 @@ spec:
heritage: {{ .Release.Service }}
role: alluxio-logserver
spec:
tolerations:
{{- if .Values.logserver.tolerations }}
{{ toYaml .Values.logserver.tolerations | trim | indent 8 }}
{{- end }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | trim | indent 8 }}
{{- end }}
containers:
- name: alluxio-logserver
image: {{ .Values.image }}:{{ .Values.imageTag }}
Master template:
@@ -69,6 +69,13 @@ spec:
{{ toYaml .Values.master.nodeSelector | trim | indent 8 }}
{{- else if .Values.nodeSelector }}
{{ toYaml .Values.nodeSelector | trim | indent 8 }}
{{- end }}
tolerations:
{{- if .Values.master.tolerations }}
{{ toYaml .Values.master.tolerations | trim | indent 8 }}
{{- end }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | trim | indent 8 }}
{{- end }}
securityContext:
runAsUser: {{ .Values.user }}
Worker template:
@@ -64,6 +64,13 @@ spec:
{{ toYaml .Values.worker.nodeSelector | trim | indent 8 }}
{{- else if .Values.nodeSelector }}
{{ toYaml .Values.nodeSelector | trim | indent 8 }}
{{- end }}
tolerations:
{{- if .Values.worker.tolerations }}
{{ toYaml .Values.worker.tolerations | trim | indent 8 }}
{{- end }}
{{- if .Values.tolerations }}
{{ toYaml .Values.tolerations | trim | indent 8 }}
Contributor Author (ZhuTopher):

@jiacheliu3 Is this what you meant by union? This will include duplicate definitions if the same toleration appears in both .Values.tolerations and .Values.xxx.tolerations, but I think that's on the user if they do that. Also, I tested duplicate toleration definitions and I don't think they matter to Kubernetes anyway.

Contributor (jiacheliu3):

Exactly, this is what I meant.
{{- end }}
containers:
- name: alluxio-worker
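To make the "union" discussed in the review thread above concrete (an illustrative sketch with made-up values, not part of the PR): when both lists are set, the two if blocks emit the component-specific entries first and then the chart-wide ones, so the worker Pod receives the concatenation of both lists. Given, say,

tolerations: [ {"key": "env", "operator": "Equal", "value": "prod", "effect": "NoSchedule"} ]
worker:
  tolerations: [ {"key": "dedicated", "operator": "Exists", "effect": "NoExecute"} ]

the rendered worker Pod spec would contain approximately:

      tolerations:
        - effect: NoExecute
          key: dedicated
          operator: Exists
        - effect: NoSchedule
          key: env
          operator: Equal
          value: prod

As the author notes above, an entry that appears in both lists is simply rendered twice, which does not appear to bother Kubernetes.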
11 changes: 11 additions & 0 deletions integration/kubernetes/helm-chart/alluxio/values.yaml
@@ -46,6 +46,12 @@ properties:
# Use labels to run Alluxio on a subset of the K8s nodes
# nodeSelector: {}

# A list of K8s Node taints to allow scheduling on.
# See the Kubernetes docs for more info:
# - https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
# eg: tolerations: [ {"key": "env", "operator": "Equal", "value": "prod", "effect": "NoSchedule"} ]
# tolerations: []
ZhuTopher marked this conversation as resolved.

## Master ##

master:
@@ -84,6 +90,7 @@ master:
# JVM options specific to the master container
jvmOptions:
nodeSelector: {}
tolerations: []
podAnnotations: {}

jobMaster:
@@ -178,6 +185,7 @@ worker:
# JVM options specific to the worker container
jvmOptions:
nodeSelector: {}
tolerations: []
podAnnotations: {}

jobWorker:
@@ -302,6 +310,8 @@ fuse:
limits:
cpu: "4"
memory: "4G"
nodeSelector: {}
tolerations: []
podAnnotations: {}


@@ -407,6 +417,7 @@ logserver:
# JVM options specific to the logserver container
jvmOptions:
nodeSelector: {}
Contributor Author (ZhuTopher):

@jiacheliu3 I removed this because it was (and will still be) unused.

Contributor (jiacheliu3):

Actually, let's keep it this way (don't remove this line).

One use case I can imagine: there are nodes that have storage and nodes that do not. By defining a selector, the logserver is scheduled to a node that has storage, so the logs are persisted to that storage.

Another argument: let's not take away what's there unless the whole PR is intended for that. In this PR we add tolerations, so let's not touch unrelated things. In a follow-up PR just for nodeSelector, we can do the refactor as we redesign.

Contributor (jiacheliu3):

@ZhuTopher I'll merge after this is reverted. Thanks!

Contributor (jiacheliu3):

@ZhuTopher Oops, sorry, I seem to have clicked "resolve" unintentionally. Could you revert this line, and then I'll merge? Thanks!

Contributor Author (ZhuTopher):

Oh sorry, I didn't notice this comment earlier, my bad! The reason I removed this from the Helm chart values.yaml is that the field is unused in templates/logserver/deployment.yaml.

I tried to add the nodeSelector behaviour from our other templates to it, but since we decided to leave that out of this PR, I also removed the related chart value to avoid confusion (the chart would imply you can specify a nodeSelector when you actually can't).

Anyway, we can leave all of this to a separate nodeSelector PR, so I'll revert it for now.

tolerations: []
# volumeType controls the type of log volume.
# It can be "persistentVolumeClaim" or "hostPath" or "emptyDir"
volumeType: persistentVolumeClaim
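As a usage sketch (the file name and values below are hypothetical, not taken from the PR), a custom values file that combines a chart-wide toleration with a master-specific one could look like this; like any Helm override, it would be passed with -f to helm install or helm upgrade:

# custom-tolerations.yaml (hypothetical example)
# Rendered into every Alluxio Pod touched by this PR (master, worker, fuse, logserver)
tolerations:
  - key: "env"
    operator: "Equal"
    value: "prod"
    effect: "NoSchedule"

# Rendered only into the master Pods, alongside the chart-wide list
master:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "alluxio-master"
      effect: "NoExecute"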