Tolerations and affinity #176
Conversation
pantierra left a comment
Many thanks for the PR! Looks good already. I was just wondering whether we could avoid setting empty values for affinity and tolerations in values.yaml. I'd prefer to have them added only if needed.
Co-authored-by: xıʃǝɟ <felix@developmentseed.org>
Thanks for the review @pantierra! I've removed the empty values from values.yaml as you suggested. I had added them because they have more visibility there, but it would also work if they are mentioned somewhere else. Btw I used the

No, all good.
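For reference, a minimal sketch of the usual Helm pattern for rendering these blocks only when they are set, so that values.yaml needs no empty placeholders (the file path and the `raster` value key are illustrative assumptions, not necessarily what this PR uses):

```yaml
# templates/raster-deployment.yaml (hypothetical path; key names are assumptions)
spec:
  template:
    spec:
      # Rendered only when .Values.raster.tolerations is non-empty
      {{- with .Values.raster.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      # Rendered only when .Values.raster.affinity is non-empty
      {{- with .Values.raster.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```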
This PR adds tolerations and affinity settings for all pods. This is necessary for clusters with multiple node types, where operators want to run eoapi only on specific ones.
I've tested the change with the following values. With these, all pods run on the desired nodes (except for pgbouncer and pgbackrest, but those can be configured via the postgrescluster values). The values are a bit repetitive, but it might be a realistic use case that e.g. titiler should run on specific nodes, which would not be possible with a global tolerations/affinity setting (although a global setting would be enough for my use case for now).
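For illustration, per-service values along these lines pin a pod to dedicated nodes. The service key `raster`, the taint `dedicated=eoapi`, and the node label `node-group=eoapi` are assumptions for the sketch, not the exact values tested here; the same block would be repeated for the other services (e.g. vector, stac):

```yaml
raster:
  # Tolerate a taint applied to the dedicated eoapi nodes (hypothetical taint).
  tolerations:
    - key: dedicated
      operator: Equal
      value: eoapi
      effect: NoSchedule
  # Require scheduling onto nodes carrying a matching label (hypothetical label).
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-group
                operator: In
                values:
                  - eoapi
```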
I'm not sure whether these settings can realistically be tested in CI with k3s. We'd need different nodes in k3s and a way to make sure the services are scheduled on the correct ones.
Related discussion: #172