
Istio Ingress Gateway listens on 0.0.0.0:15090 #21202

Closed
mattcaron opened this issue Feb 17, 2020 · 15 comments
Labels
area/networking area/security lifecycle/automatically-closed Indicates a PR or issue that has been closed automatically. lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while

Comments

@mattcaron

mattcaron commented Feb 17, 2020

Bug description

According to istioctl proxy-config listener, a default istio-ingressgateway install listens on 0.0.0.0:80 and 0.0.0.0:15090.

The former is not problematic - it is an example, and is easily modified for purpose or deleted.

However, the latter is proving quite difficult because I cannot figure out how to turn it off. It doesn't seem to do anything when one connects from the outside (the connection just times out), but something is still listening, and there doesn't seem to be a way to disable it. There should be, because extra listening ports are a security risk.

As I understand it, 15090 is open so Prometheus can query stats. However, that should not be passed through to the outside world. The other open statistics ports seem to behave this way, and this one should as well.
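For context, Prometheus pulls Envoy's stats from that port; a minimal scrape-config sketch, where the job name and target address are made up for illustration, and the `/stats/prometheus` path on 15090 reflects my understanding of the default setup:

```yaml
# Hypothetical Prometheus scrape job for the ingress gateway's Envoy stats.
# The metrics_path and port match the default Envoy stats endpoint as I
# understand it; the target address is illustrative only.
scrape_configs:
  - job_name: istio-ingressgateway-envoy   # assumed name, not from the install
    metrics_path: /stats/prometheus
    static_configs:
      - targets: ['<ingressgateway-pod-ip>:15090']
```

In practice the Istio-bundled Prometheus discovers pods dynamically rather than using a static target; this sketch just shows why the port has to be listening at all.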

[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[x] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[x] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Expected behavior

The default configuration should only have the example port 80 listening.

Steps to reproduce the bug

  1. Install Istio per the instructions.
  2. Run istioctl proxy-config listener <pod> and see the listeners.

Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)

Istioctl: 1.4.3
Kubectl: 1.16.3

How was Istio installed?

istioctl manifest generate --set profile=sds \
  --set values.sidecarInjectorWebhook.rewriteAppHTTPProbe=true \
  --set values.gateways.istio-ingressgateway.sds.enabled=true \
  --set values.tracing.enabled=true \
  --set values.grafana.enabled=true \
  --set values.kiali.enabled=true \
  --set "values.kiali.dashboard.grafanaURL=http://grafana:3000" \
  --set "values.kiali.dashboard.jaegerURL=http://jaeger-query:16686" \
  | kubectl apply -f -

Environment where bug was observed (cloud vendor, OS, etc)

Amazon EKS.

@howardjohn
Member

The istio-ingressgateway Service does not expose 15090, so I don't think this is something that can be hit externally.

@mattcaron
Author

I don't think it can be hit externally. If you try, it just times out, the same as any other port that is not shown as open (say, like 789). Otherwise I would have flagged it as a security issue. It isn't, because the traffic is not connected to anything. But, it's still listening.

If I get the service, it says:

istio-ingressgateway   LoadBalancer   XXXX   XXXX 15020:31496/TCP,80:32610/TCP,443:31962/TCP,15029:32330/TCP,15030:30280/TCP,15031:31339/TCP,15032:31425/TCP,15443:32583/TCP   XX

So, not listening on 15090. But, all of these (except 443, which I added) are not listed by proxy-config - so why is 15090?

@howardjohn
Member

15090 is listening so Prometheus can scrape it, but it's not exposed in the Service since it shouldn't be externally accessed.

The other ones are in the Service but not listening because you don't have a Gateway set up for them. Technically they don't need to be in the Service; I think we just put them there by default because they are common ports, so people don't have to modify the Service every time they want to expose port 443.
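If minimizing the declared ports matters, the unused entries could be trimmed from the Service spec. A rough sketch, keeping only the ports actually routed by a Gateway - the names, targetPorts, and structure here are assumptions based on the default install, not a verified manifest:

```yaml
# Hypothetical trimmed istio-ingressgateway Service. Only illustrative:
# check your actual generated manifest for the correct names/targetPorts
# before applying anything like this.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway
  ports:
    - name: status-port      # health/readiness; assumed to still be needed
      port: 15020
      targetPort: 15020
    - name: https
      port: 443
      targetPort: 443
```

Note this only changes what the LoadBalancer forwards; it has no effect on what Envoy itself binds inside the pod, which is the 15090 question in this issue.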

@mattcaron
Author

I don't have a Gateway set up for 15090 either, and yet it's still listed. I only have one set up for 443 (which is also listening, as it should be).

Here are my Gateway and VirtualService sections; they replace the default port 80 section:

---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway  # use istio default ingress gateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: PASSTHROUGH
      hosts:
        - "*"
---

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gateway-route-port443
spec:
  hosts:
    - "*"
  gateways:
    - gateway
  tls:
    - match:
        - port: 443
          sni_hosts:
            - "*"
      route:
        - destination:
            host: firewall

@howardjohn
Member

Yes, 15090 is always there no matter what you do, so that stats work.

@mattcaron
Author

That's unfortunate, and may disqualify Istio for us. I'll have to talk to the team.

Thanks for the information.

@howardjohn
Member

here is the relevant config https://github.com/istio/istio/blob/release-1.4/tools/packaging/common/envoy_bootstrap_v2.json#L483

we could probably make it optional

@mattcaron
Author

The ideal solution would be for it to listen only to traffic from within the cluster so that Prometheus could still scrape it, but I realize that's hard because we don't know what the IP is at the time the template is processed.

Honestly, I'd accept having no Prometheus stats from the ingress gateway if that meant the port wasn't listening anymore. But if it ended up affecting all Envoy instances (so we got no Prometheus telemetry at all), that is a cost I'd be reluctant to pay.

@howardjohn
Member

Just trying to understand the use case more: what is the specific issue with this port on the gateway? And why is it bad on the gateway but not on the sidecars (since neither is exposed on Services)?

@howardjohn
Member

You might be able to remove it yourself with https://github.com/istio/istio/tree/master/samples/custom-bootstrap, but it's a bit hard since the merging won't remove things. You could also build your own image with the config removed, but that may not be feasible. I guess you could also overwrite the template by mounting your own config in the same location (volume mounts override files in the image).
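A rough sketch of the custom-bootstrap route, assuming the ConfigMap name, `custom_bootstrap.json` key, and `bootstrapOverride` annotation follow the pattern in the linked sample (the JSON body here is illustrative, and as noted above, a merged overlay can add or change fields but cannot delete the 15090 listener):

```yaml
# Hypothetical bootstrap overlay following samples/custom-bootstrap.
# The overlay is MERGED into the generated bootstrap, so it can tweak
# fields (shown here with a harmless stats_config change) but cannot
# remove an existing listener.
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-custom-bootstrap-config   # name assumed from the sample
  namespace: istio-system
data:
  custom_bootstrap.json: |
    {
      "stats_config": {
        "use_all_default_tags": false
      }
    }
```

The pod then opts in via an annotation on its template, something like `sidecar.istio.io/bootstrapOverride: "istio-custom-bootstrap-config"` (annotation name as I recall it from the sample; verify against the sample's README).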

@mattcaron
Author

Honestly? Because when I dump out the proxy-config it shows as listening on 0.0.0.0 and the CISO will fail it because of that.

I can explain that the service description lists ports, but that they can't be reached because there's nothing listening in the proxy-config, and he'll buy that.

But, if I explain that there's something listening in the proxy config but it doesn't go anywhere, he'll insist that I shut it off - which is why I'm trying to do it before I present it for security review.

@howardjohn
Member

It's not so much that it doesn't go anywhere, it's just not reachable externally - only directly via the pod IP address, which is only accessible from within the cluster, not by external clients.
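One way to make that in-cluster-only boundary explicit and auditable (not something suggested in this thread; a sketch assuming a CNI that enforces NetworkPolicy and a Prometheus running in a namespace labeled `name: monitoring` - both assumptions):

```yaml
# Hypothetical NetworkPolicy limiting port 15090 on the gateway pods to
# Prometheus only. Once a pod is selected by a policy, only listed traffic
# is allowed, so regular gateway traffic (443 here) must be allowed
# explicitly too. Selectors and labels are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: limit-envoy-stats
  namespace: istio-system
spec:
  podSelector:
    matchLabels:
      istio: ingressgateway
  policyTypes: [Ingress]
  ingress:
    - ports:                     # keep normal gateway traffic open
        - protocol: TCP
          port: 443
    - from:                      # stats port only from the monitoring namespace
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - protocol: TCP
          port: 15090
```

This doesn't stop Envoy from binding 0.0.0.0:15090, which is the CISO's objection below, but it does give a policy artifact showing who can actually reach it.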

@howardjohn
Member

I am not saying your concerns are not valid, by the way - it seems reasonable to want to minimize what we are listening on.

@mattcaron
Author

By "not going anywhere", I meant the ingress traffic. As in, the Gateway -> VirtualService routing routes 443 to someplace useful, but since there are no rules for 15090, traffic to it goes nowhere even though the port is listening.

And I never took your questions as invalidating my concerns, merely offering explanation and options. No worries.

@istio-policy-bot istio-policy-bot added the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label May 18, 2020
@istio-policy-bot

🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2020-02-17. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.

Created by the issue and PR lifecycle manager.

@istio-policy-bot istio-policy-bot added the lifecycle/automatically-closed Indicates a PR or issue that has been closed automatically. label Jun 2, 2020