service.spec.externalIPs is unnecessarily limited (enhancement request) #124636
Comments
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind support
I don't fully understand the request. The NodePort port range is configurable; you can use low ports at your own risk of causing collisions. A Service NodePort, when available, is exposed on all Nodes.
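For reference, the NodePort range mentioned here is an API server flag (`--service-node-port-range`); on a kubeadm-managed cluster it can be set via the ClusterConfiguration. A sketch, where the `80-32767` range is illustrative (lowering the floor risks collisions with host-level services):

```yaml
# Sketch of a kubeadm ClusterConfiguration widening the NodePort range.
# The default range is 30000-32767; the value below is illustrative only.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    service-node-port-range: "80-32767"
```

Note that, as discussed below, managed offerings that run the control plane for you may not expose this flag at all.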
Should those addresses capture the traffic and forward it to the service?
The issue here is that NodePort ranges are not necessarily configurable everywhere (e.g., I believe EKS doesn't permit this, since the API server runs in their control plane and you don't get much, if any, control over it). The intention is that this should behave like the existing externalIPs handling: each entry in spec.externalIPs becomes its own rule, and there's no reason a CIDR shouldn't turn into a single rule.
The current situation is that if I want to manage this pseudo-NodePort, I need some active pod monitoring nodes and updating the externalIPs entries on services, with the concomitant update cost from kube-proxy massaging rules.
Can't be implemented for proxy-mode=ipvs afaik, since the addresses must be assigned to the dummy device
Is
If so, ref kubernetes-sigs/kubespray#10572
/remove kind-bug
/remove-kind bug |
A few thoughts: there are other ways to get low ports, including using hostPort. Can you say more about what is driving this?
I'm looking for behaviour much like hostPort - or, like a low-numbered NodePort with the topology-aware capabilities coming with k8s 1.30. Effectively this is just using k8s as a matrix for an appliance - I'd like local traffic by default, but some fallback behaviour during periods of change.
Can you literally use hostPort? It sounds like you want an implementation of Services with type=LoadBalancer, but aren't in a cloud? Have you looked at things like MetalLB?
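For context on the hostPort suggestion: hostPort binds a container port directly on each node that runs the pod, with no Service involved. A minimal sketch, where the pod name and image are placeholders:

```yaml
# Sketch: expose a container on port 80 of whatever node schedules this pod.
# Name and image are placeholders, not from the original thread.
apiVersion: v1
kind: Pod
metadata:
  name: appliance-frontend
spec:
  containers:
  - name: app
    image: example.com/app:latest
    ports:
    - containerPort: 8080
      hostPort: 80   # binds node port 80 directly, bypassing the Service layer
```

Paired with a DaemonSet, this gives a low port on every node, but without the Service-level load balancing and failover the original request is after.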
What happened?
I'm looking at various ways to expose a service directly from a set of hosts in a k8s cluster. [Yes, I know about the caveats here, but this is a service that doesn't play like a traditional http/rest endpoint.]
I can do this with a svc that has a port setting and a list of spec.externalIPs - however, I need to maintain that list of externalIPs manually (or write a small operator that does so). Since these externalIPs tend to turn into individual iptables rules (or the moral equivalent, depending on your networking stack), it's surprising that I can't give a CIDR here. That would (a) cut down on the number of iptables rules that need evaluating, and (b) require no dynamic updates.
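A minimal sketch of the workaround being described, assuming illustrative names, ports, and link-local addresses; the externalIPs list is what has to be kept in sync with the node set by hand or by an operator:

```yaml
# Sketch of the current workaround: one externalIP entry per node address.
# Names, ports, and addresses are illustrative, not from the original issue.
apiVersion: v1
kind: Service
metadata:
  name: appliance
spec:
  selector:
    app: appliance
  ports:
  - port: 443
    targetPort: 8443
  externalIPs:        # must be updated whenever nodes are added or removed
  - 169.254.1.10
  - 169.254.1.11
  - 169.254.1.12
```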
What did you expect to happen?
I'd like to be able to say
spec.externalIPs: ["169.254.0.0/16"]
and not have to maintain the set of IPs on the svc through some external reconciliation.
How can we reproduce it (as minimally and precisely as possible)?
As above - this is a limitation in the current validation of the externalIPs field.
Anything else we need to know?
This is related to a slew of questions people ask about exposing NodePorts on low ports. There are legitimate reasons to want to do this (again, largely around services that don't behave like a cloudy rest service, but those definitely exist), but there's always been a reasonable degree of pushback against supporting this - also for legitimate reasons.
Kubernetes version
all current
Cloud provider
n/a
OS version
n/a
Install tools
n/a
Container runtime (CRI) and version (if applicable)
n/a
Related plugins (CNI, CSI, ...) and versions (if applicable)
n/a