Istio Ingress Gateway listens on 0.0.0.0:15090 #21202
The istio-ingressgateway service does not expose 15090, so I don't think this is something that can be hit externally.
I don't think it can be hit externally. If you try, it just times out, the same as any other port that is not shown as open (say, like 789). Otherwise I would have flagged it as a security issue. It isn't, because the traffic is not connected to anything. But, it's still listening. If I get the service, it says:
So, not listening on 15090. But all of these (except 443, which I added) are not listed by
15090 is listening so Prometheus can scrape it, but it's not exposed in the Service since it shouldn't be externally accessed. The other ports are in the Service but not listening because you don't have a Gateway set up. Technically they don't need to be in the Service; I think we just put them there by default because they are common ports to use, so people don't have to modify the Service every time they want to expose port 443.
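To illustrate the distinction being made above, here is a trimmed, hypothetical sketch of the two port lists. The port entries shown are common defaults, not copied from this thread; the point is only that 15090 appears in the Envoy listener output but not in the Service:

```yaml
# Hypothetical, trimmed istio-ingressgateway Service: 15090 is deliberately
# absent, so external traffic can never reach it through the LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer
  ports:
  - name: http2
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
# Meanwhile `istioctl proxy-config listener <gateway-pod>` still shows
# 0.0.0.0:15090, because Envoy binds it on the pod IP for Prometheus scraping.
```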
I don't have a Gateway set up for 15090 either, yet it's still listed. I only have one set up for 443 (which is also listening, as it should be). Here are my Gateway and VirtualService sections; they replace the default port 80 section:
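(The poster's actual config was not captured above. For context, a Gateway plus VirtualService pair for port 443 typically looks roughly like the following sketch; every name, host, and credential reference here is a hypothetical placeholder, not the poster's configuration.)

```yaml
# Hypothetical sketch only: not the poster's actual config.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway                  # placeholder name
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: my-tls-cert   # placeholder; secret served via SDS
    hosts:
    - "example.com"                 # placeholder host
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtualservice           # placeholder name
spec:
  hosts:
  - "example.com"
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: my-backend            # placeholder backend service
        port:
          number: 8080
```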
Yes, 15090 is always there no matter what you do, so that stats work.
That's unfortunate, and may disqualify Istio for us. I'll have to talk to the team. Thanks for the information.
Here is the relevant config: https://github.com/istio/istio/blob/release-1.4/tools/packaging/common/envoy_bootstrap_v2.json#L483. We could probably make it optional.
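For readers who don't want to follow the link: the bootstrap defines a static Envoy listener along roughly these lines. This is an abridged paraphrase of the general shape, not a copy of the linked file; field names and the `prometheus_stats` cluster name should be verified against the actual JSON:

```json
{
  "address": { "socket_address": { "address": "0.0.0.0", "port_value": 15090 } },
  "filter_chains": [{
    "filters": [{
      "name": "envoy.http_connection_manager",
      "config": {
        "route_config": {
          "virtual_hosts": [{
            "name": "backend",
            "domains": ["*"],
            "routes": [{
              "match": { "prefix": "/stats/prometheus" },
              "route": { "cluster": "prometheus_stats" }
            }]
          }]
        }
      }
    }]
  }]
}
```

Because this listener is part of the static bootstrap rather than configuration pushed by Pilot, no Gateway or VirtualService can remove it.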
The ideal solution would be for it to listen only for traffic from inside the cluster so that Prometheus could still scrape it, but I realize that's hard because we don't know what the IP is at the time the template is processed. Honestly, I'd accept having no Prometheus stats from the ingress gateway if that meant the port wasn't listening anymore. But if it ended up affecting all Envoy instances (so we got no Prometheus telemetry at all), that is a cost I'd be reluctant to pay.
Just trying to understand the use case more: what is the specific issue with this port on the gateway? And why is it bad on the gateway but not the sidecars (since neither is exposed on Services)?
You might be able to remove it yourself with https://github.com/istio/istio/tree/master/samples/custom-bootstrap, but it's a bit tricky since the merging won't remove things. You could also build your own image with the config removed, but that may not be feasible. I guess you could also overwrite the template by mounting your own config in the same location (volume mounts override files in the image).
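As a sketch of that last option (overwriting the bundled template with a volume mount), one could patch the gateway Deployment roughly as below. The ConfigMap name and the mount path are assumptions for illustration; check where your proxy image actually keeps the bootstrap template before relying on this:

```yaml
# Hypothetical patch: mount an edited bootstrap file over the one in the image.
# The mountPath is an assumed location; verify it against your actual image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  template:
    spec:
      containers:
      - name: istio-proxy
        volumeMounts:
        - name: custom-bootstrap
          mountPath: /var/lib/istio/envoy/envoy_bootstrap_v2.json  # assumed path
          subPath: envoy_bootstrap_v2.json
      volumes:
      - name: custom-bootstrap
        configMap:
          name: gateway-bootstrap   # ConfigMap holding the edited template
```

The advantage over the custom-bootstrap overlay sample is that a full file replacement can actually remove the 15090 listener, whereas the overlay merge can only add or modify entries.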
Honestly? Because when I dump out the proxy-config, it shows as listening on 0.0.0.0, and the CISO will fail it because of that. I can explain that the service description lists ports that can't be reached because nothing is listening in the proxy-config, and he'll buy that. But if I explain that there's something listening in the proxy config that doesn't go anywhere, he'll insist that I shut it off, which is why I'm trying to do it before I present it for security review.
It's not so much that it doesn't go anywhere; it's just not reachable externally. It can only be reached directly by the pod IP address, which is accessible from within the cluster but not by external clients.
I am not saying your concerns are invalid, by the way; it seems reasonable to want to minimize what we are listening on.
By "not going anywhere", I meant the ingress traffic. As in, the Gateway -> VirtualService routing sends 443 someplace useful; since there are no rules on 15090, the traffic doesn't go anywhere even though the port is listening. And I never took your questions as invalidating my concerns, merely as offering explanation and options. No worries.
🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2020-02-17. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions. Created by the issue and PR lifecycle manager.
Bug description
According to `istioctl proxy-config listener`, a default istio-ingressgateway install listens on 0.0.0.0:80 and 0.0.0.0:15090. The former is not problematic: it is an example, and is easily modified for purpose or deleted.

However, the latter is proving quite difficult because I cannot figure out how to turn it off. It doesn't seem to do anything when one connects from the outside (it just times out), but something is still listening, and there doesn't appear to be any way to disable it. There should be, because extra listening ports are a security concern.

As I understand it, 15090 is open so Prometheus can query stats. However, it should not be exposed to the outside world. The other statistics ports seem to behave this way, and this one should as well.
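For reference, Prometheus reaches this port via pod IPs from inside the cluster, with a scrape job along these lines. The job name and relabeling rule below are illustrative, not Istio's exact shipped configuration:

```yaml
# Illustrative Prometheus scrape config: 15090 is reached via pod IPs,
# which are only routable inside the cluster.
scrape_configs:
- job_name: envoy-stats             # illustrative name
  metrics_path: /stats/prometheus
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_container_port_number]
    regex: "15090"
    action: keep                    # keep only targets exposing 15090
```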
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[x] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[x] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
Expected behavior
The default configuration should only have the example port 80 listening.
Steps to reproduce the bug
Run `istioctl proxy-config listener <pod>` and see the listeners.

Version (include the output of `istioctl version --remote` and `kubectl version` and `helm version` if you used Helm)
Istioctl: 1.4.3
Kubectl: 1.16.3
How was Istio installed?
istioctl manifest generate --set profile=sds \
  --set values.sidecarInjectorWebhook.rewriteAppHTTPProbe=true \
  --set values.gateways.istio-ingressgateway.sds.enabled=true \
  --set values.tracing.enabled=true \
  --set values.grafana.enabled=true \
  --set values.kiali.enabled=true \
  --set "values.kiali.dashboard.grafanaURL=http://grafana:3000" \
  --set "values.kiali.dashboard.jaegerURL=http://jaeger-query:16686" \
  | kubectl apply -f -
Environment where bug was observed (cloud vendor, OS, etc)
Amazon EKS.