Requests fail when src & dst are the same #1585
Comments
Well that sounds interesting. Is there anything special with your application? Are you using TLS? Could we see your k8s resource yaml?
So, no TLS. It was initially found on a deployment running a Node.js GraphQL application on port 80. I've since reproduced this on every other deployment I've tried. Here is the simplest reproduction that I've found:
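A rough sketch of this kind of setup (not the poster's exact steps; the name `echo` and the nginx image are placeholders): deploy a plain HTTP service, inject the Linkerd sidecar, then have the pod call its own service DNS name.

```sh
# Hypothetical minimal setup: a plain HTTP deployment plus a service.
kubectl create deployment echo --image=nginx --port=80
kubectl expose deployment echo --port=80

# Add the Linkerd sidecar to the deployment.
kubectl get deploy echo -o yaml | linkerd inject - | kubectl apply -f -
```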
Those are fantastic replication steps, thank you!
Quick update - just upgraded to
I'd be curious to see what
Hm, so if the dst is a socket address, the proxy will use it directly, which would explain the loopback succeeding. However, if it's a hostname, then it will either:
Is it possible to collect debug logs from the proxy? Or do we have an environment that I can poke into and enable them myself?
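For reference, one way to collect those logs (a sketch, assuming the sidecar container is named `linkerd-proxy` and reads its log level from the `LINKERD2_PROXY_LOG` environment variable; the deployment name `echo` is a placeholder):

```sh
# Raise the proxy's log level to debug (assumes the sidecar container is
# named "linkerd-proxy" and honors the LINKERD2_PROXY_LOG variable).
kubectl set env deployment/echo -c linkerd-proxy LINKERD2_PROXY_LOG=debug

# Reproduce the failing request, then dump the proxy's logs.
kubectl logs deployment/echo -c linkerd-proxy --tail=200
```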
So I used my steps from above. Then after sending 4
@olix0r hope that helps!
Having the same issue. We have a pod that acts as an authorization microservice; this pod can make requests to itself to check other permissions, so the hostname is
All this makes me wonder if something is preventing the connection from being redirected to the proxy. Perhaps something in the
Actually, while there was a proxy change for this, it won't be fixed until the iptables config is changed in this repo also.
When a pod sends requests to itself, the proxy properly redirects traffic from the originating container in the pod through the outbound listener of the proxy. But when the request should reach the inbound side of the proxy, it instead skips the proxy and calls the original container that made the request directly. This can cause problems for containers that serve HTTP, as the proxy naively tries to initiate an HTTP/2 connection to the destination of the request. (See #1585 for a concrete example.) This PR adds a new iptables rule which, coupled with a proxy [change](linkerd/linkerd2-proxy#122), ensures that requests that occur in the aforementioned scenario always redirect to the inbound listener of the proxy first. fixes #1585 Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>
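For illustration, the kind of rule described might look roughly like the following (a sketch, not the exact rule from the PR; it assumes the proxy's inbound listener is on port 4143 and the proxy runs as UID 2102):

```sh
# Hypothetical sketch: traffic the proxy sends over the loopback interface to a
# non-127.0.0.1 address (i.e. the pod's own IP) is redirected back to the
# proxy's inbound listener instead of going straight to the app container.
iptables -t nat -A OUTPUT -o lo ! -d 127.0.0.1/32 \
  -m owner --uid-owner 2102 -p tcp -j REDIRECT --to-ports 4143
```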
Thanks for fixing this!
I'm seeing the same issue. Steps to recreate here: https://github.com/glindsell/free-peer/tree/ingress/stream-meshed
@glindsell thanks for putting together a repro and sharing! It's a little hard to tease out a clear problem description from that README, though. Would you mind opening a new issue so that we can make sure we get to the bottom of it?
Hello,
When making HTTP/1.1 requests where the `src` and `dst` are the same (a pod sending a request to itself), the proxy responds with a 500 status code. If the request is sent to a different pod in the deployment, everything works fine. If you send the request to the loopback address rather than the service DNS name, that is also fine. Is this expected?
Using the default sidecar generated from `linkerd inject`.
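For illustration, the behavior described looks roughly like this from inside the application container (a sketch with a hypothetical service named `echo` in the `default` namespace, serving on port 80):

```sh
# Request to the pod's own service DNS name: fails with a 500 from the proxy.
curl -i http://echo.default.svc.cluster.local/

# Same request over loopback: bypasses the service address and succeeds.
curl -i http://127.0.0.1:80/
```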
Thanks