feat!: Make NodePort stickiness configurable #340
base: main
Conversation
By pinning the Pod to a specific (stable) Kubernetes node, stable addresses can be provided using NodePorts. The stickiness is achieved by listener-operator by setting the `volume.kubernetes.io/selected-node` annotation on the Listener PVC.
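For illustration, a pinned Listener PVC might look like the following. This is a hedged sketch: only the annotation key `volume.kubernetes.io/selected-node` comes from the change above; all resource names, the node name, and the storage size are made up.

```yaml
# Hypothetical Listener PVC after listener-operator has pinned it.
# Only the annotation key is taken from the diff above; the names
# (listener-my-broker-0, my-stable-node-1) are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: listener-my-broker-0
  namespace: default
  annotations:
    # Pins volume provisioning (and thereby the consuming Pod)
    # to this specific node:
    volume.kubernetes.io/selected-node: my-stable-node-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
```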
I think we should avoid calling this "sticky" and "stickiness" as that already has meaning with Load Balancers, and is not about hard-pinning, but rather, preferred affinity for sessions.
Maybe "pinning" or "affinity" is better. And the setting could be `nodePinning` or `strictNodeAffinity`.
However, this only works on setups with long-living nodes. If your nodes are rotated on a regular basis, the Pods previously running on a removed node will be stuck in Pending until you delete the PVC with the stickiness.
I can't quite recall, but how are PVCs bound to a node? Is it the IP?
That is, if the node reboots, but it is "who it used to be" (same IP, same node name, same node labels, etc...) then that should be fine. What stops it being fine? A change in Node IP, Node Name, node labels?
I think it is worth defining long-lived.
I can imagine on-prem, the node is the same, but it can come down for reboot. New nodes might be added to scale out, and might sometimes disappear. In cloud environments, you typically throw away old nodes and move across to new nodes (when updating for example).
Description
Part of stackabletech/issues#770
Needs stackabletech/operator-rs#1105
Definition of Done Checklist
Author
Reviewer
Acceptance
- Add the `type/deprecation` label & add to the deprecation schedule
- Add the `type/experimental` label & add to the experimental features tracker