Add Helm chart support for k8s node tolerations #13214
Changes from all commits: c4d4aed, 3a540fe, 3c8f0a2, c1609bf, bcfeb7f
```diff
@@ -163,3 +163,7 @@
 0.6.17

 - Add hostAliases in Master and Worker Pods
+
+0.6.18
+
+- Add support for Node tolerations
```
```diff
@@ -46,6 +46,12 @@ properties:
 # Use labels to run Alluxio on a subset of the K8s nodes
 # nodeSelector: {}

+# A list of K8s Node taints to allow scheduling on.
+# See the Kubernetes docs for more info:
+# - https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+# eg: tolerations: [ {"key": "env", "operator": "Equal", "value": "prod", "effect": "NoSchedule"} ]
+# tolerations: []
+
 ## Master ##

 master:
```
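For readers less used to the inline flow-style form in the comment above, the same example toleration could be written in block-style YAML in `values.yaml` (a sketch; the key/value names are taken directly from the inline example):

```yaml
# Applies to all Alluxio pods unless overridden per component
tolerations:
  - key: "env"
    operator: "Equal"
    value: "prod"
    effect: "NoSchedule"
```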
```diff
@@ -84,6 +90,7 @@ master:
   # JVM options specific to the master container
   jvmOptions:
   nodeSelector: {}
+  tolerations: []
   podAnnotations: {}

 jobMaster:
```
```diff
@@ -178,6 +185,7 @@ worker:
   # JVM options specific to the worker container
   jvmOptions:
   nodeSelector: {}
+  tolerations: []
   podAnnotations: {}

 jobWorker:
```
```diff
@@ -302,6 +310,8 @@ fuse:
   limits:
     cpu: "4"
     memory: "4G"
+  nodeSelector: {}
+  tolerations: []
   podAnnotations: {}

```
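For context on where these chart values ultimately land: Kubernetes expects tolerations under a Pod's `spec.tolerations` field. A minimal rendered Pod manifest would look like this (names are hypothetical, for illustration only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alluxio-worker-example   # hypothetical name, for illustration
spec:
  tolerations:
    - key: "env"
      operator: "Equal"
      value: "prod"
      effect: "NoSchedule"
  containers:
    - name: alluxio-worker
      image: alluxio/alluxio     # tag omitted; illustration only
```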
```diff
@@ -407,6 +417,7 @@ logserver:
   # JVM options specific to the logserver container
   jvmOptions:
   nodeSelector: {}
+  tolerations: []
   # volumeType controls the type of log volume.
   # It can be "persistentVolumeClaim" or "hostPath" or "emptyDir"
   volumeType: persistentVolumeClaim
```

Review thread on the `nodeSelector: {}` line:

**ZhuTopher:** @jiacheliu3 I removed this because it was (and will still be) unused.

**jiacheliu3:** Actually, let's keep it this way (don't remove this line). One use case I can imagine: there are nodes that have storage and nodes that do not. By defining a selector, the logserver is scheduled to a node that has storage, so the logs are persisted to that storage. Another argument: let's not take away what's there unless the whole PR is intended for that. In this PR we add tolerations, so let's not touch unrelated stuff. In the next PR, just for nodeSelector, we can do the refactor as we redesign.

**jiacheliu3:** @ZhuTopher I'll merge after this is reverted. Thanks!

**jiacheliu3:** @ZhuTopher oops, sorry, I seem to have clicked resolve unintentionally. Could you revert this line and then I'll merge? Thanks!

**ZhuTopher:** Oh sorry, I didn't notice this comment previously, my bad! The reason I removed this from the Helm chart is that I tried to add the … Anyway, again, we can leave all this to a separate 'nodeSelector' PR, so I'll revert it for now.
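The storage-node use case jiacheliu3 describes could be expressed with a node label; the label name below is hypothetical, chosen only to illustrate the idea:

```yaml
# Hypothetical "storage" label: pins the logserver to nodes that
# have local storage, so its logs survive pod rescheduling.
logserver:
  nodeSelector:
    storage: "true"
```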
Review thread on the template changes:

**ZhuTopher:** @jiacheliu3 Is this what you meant by union? This will include duplicate definitions if the same toleration is in both `.Values.tolerations` and `.Values.xxx.tolerations`, but I think that's on the user if they do that. Also, I tested duplicate toleration definitions and I don't think it matters to Kubernetes anyway.

**jiacheliu3:** Exactly, this is what I meant.
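A sketch of the "union" behavior discussed here, assuming a hypothetical named template (the helper name is not from the PR); `concat` and `toYaml` are standard functions available in Helm templates:

```yaml
{{/* Hypothetical helper: merge chart-wide and master-specific tolerations.
     Duplicate entries are passed through as-is, as discussed above. */}}
{{- define "alluxio.master.tolerations" -}}
{{- $merged := concat (.Values.tolerations | default list) (.Values.master.tolerations | default list) -}}
{{- if $merged -}}
tolerations:
{{ toYaml $merged }}
{{- end -}}
{{- end -}}
```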