feat!: Make NodePort stickiness configurable #340
Changes from all commits
@@ -82,6 +82,21 @@ spec:
           - LoadBalancer
           - ClusterIP
           type: string
+        stickyNodePorts:
+          default: false
+          description: |-
+            Whether a Pod exposed using a NodePort should be pinned to a specific Kubernetes node.
+
+            By pinning the Pod to a specific (stable) Kubernetes node, stable addresses can be
+            provided using NodePorts. The stickiness is achieved by listener-operator by setting the
+            `volume.kubernetes.io/selected-node` annotation on the Listener PVC.
+
+            However, this only works on setups with long-living nodes. If your nodes are rotated on
+            a regular basis, the Pods previously running on a removed node will be stuck in Pending
+            until you delete the PVC with the stickiness.
+
+            Because of this we don't enable stickiness by default to support all environments.
+          type: boolean
       required:
       - serviceType
       type: object

Comment on lines +94 to +96:

I can't quite recall, but how are PVCs bound to a node? Is it the IP? That is, if the node reboots but it is "who it used to be" (same IP, same node name, same node labels, etc.), then that should be fine. What stops it being fine: a change in node IP, node name, or node labels?

I also think it is worth defining "long-living". On-prem, the node may stay the same but still come down for a reboot; new nodes might be added to scale out, and nodes might sometimes disappear. In cloud environments, you typically throw away old nodes and move across to new ones (when updating, for example).
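For context on the mechanism the description refers to: the operator is said to set the `volume.kubernetes.io/selected-node` annotation on the Listener PVC, which (as far as I understand) the Kubernetes volume scheduling machinery treats as a pre-selected node for binding. A rough sketch of what such a pinned PVC could look like; the PVC name, namespace, node name, storage class, and sizes below are placeholders, not taken from this PR:

```yaml
# Illustrative only: names and values are made up for the sketch.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: listener-broker-default-0          # hypothetical Listener PVC name
  namespace: default
  annotations:
    # Per the description above, listener-operator sets this annotation when
    # stickiness is enabled; volume binding (and thus the Pod) stays on this node.
    volume.kubernetes.io/selected-node: worker-1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: listeners.stackable.tech   # placeholder storage class name
```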
I think we should avoid calling this "sticky" and "stickiness", as that term already has a meaning for load balancers, where it is not about hard pinning but about preferred session affinity. Maybe "pinning" or "affinity" is better, and the setting could be `nodePinning` or `strictNodeAffinity`.
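Regardless of the final field name, enabling the behaviour would presumably look something like the sketch below. This is hedged: only `serviceType` and `stickyNodePorts` come from the diff above; the `ListenerClass` kind, the `listeners.stackable.tech/v1alpha1` API version, and the metadata name are assumptions for illustration.

```yaml
apiVersion: listeners.stackable.tech/v1alpha1   # assumed API group/version
kind: ListenerClass                             # assumed kind
metadata:
  name: external-stable-nodes                   # illustrative name
spec:
  serviceType: NodePort
  # New field from this PR; defaults to false, so existing setups keep the current behaviour.
  stickyNodePorts: true
```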