accessing regular k8s services from istio mesh #506

While I would like to route traffic between my applications (http, grpc, tcp) using the Istio/Envoy service mesh, some applications also need to reach core TCP services like Zookeeper or Kafka.
I would like to be able to reach those core services through the regular K8s service endpoints:
app -> envoy proxy -> k8s service (by DNS name)
As far as I've found, it does not seem possible to route traffic out of the mesh, except using the Istio egress, which is http(s)-only and is not meant to talk to k8s services.
Do you have any solution or plan for that?
Thanks.

Comments
If you don't use the auth feature, then you should be able to reach non-Istio pods from Istio pods, and vice versa, in the normal way.
@kyessenov should I be able to reach pods, or services?
Everything works if I don't deploy the pod with the Istio sidecar.
Pods can only be reached through service names in Istio (we don't program all individual pod routes). This is likely due to a namespacing issue (Istio only cares about the namespace it's deployed in, cc @andraxylia). I'd hope that if you deploy Istio in the "dev" namespace, it would work (at least we test for that case).
In fact, everything is deployed in the "dev" namespace in my test.
Any chance of having Istio route to the pod's IP? I think my use case is a common one, especially when you have a "service" like Kafka or MongoDB with a rich client, where you want your "client" to know about all the existing server endpoints.
Thanks for the suggestion; we'll consider adding explicit network endpoints for headless TCP services. We were trying to preserve the service abstraction and reduce configuration load, but a rich client that wants to address endpoints directly is a legitimate use case, even more so for headless services. At some point, we would want Envoy to take over some of the rich-client functionality by adding a Kafka filter and delegating LB and other features from the rich client to Envoy. That would require a ClusterIP service. Would that make sense to you?
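For context on the distinction being discussed: a ClusterIP service resolves to a single virtual IP that a proxy can load-balance behind, while a headless service (`clusterIP: None`) resolves directly to the pod IPs, which is what rich clients expect to see. A hypothetical illustration (service names and IP addresses are made up):

```sh
# ClusterIP service: DNS returns one virtual IP; the proxy does the LB.
$ nslookup kafka.dev.svc.cluster.local
Address: 10.0.12.34

# Headless service (clusterIP: None): DNS returns every pod IP,
# so a rich client (Kafka, Cassandra...) can track each broker itself.
$ nslookup kafka-headless.dev.svc.cluster.local
Address: 10.4.0.11
Address: 10.4.0.12
Address: 10.4.0.13
```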
@kyessenov I'm not sure it's a good idea to go the way of filters in Envoy. You will end up re-writing application client logic in filters for a whole bunch of applications (Kafka, ZK, Mongo, Cassandra...). The nice thing about rich clients is that they take care of the connection/disconnection/rebalance logic; there is no point going through another tool to gain nothing. My suggestion would be to enable Istio/Envoy to route traffic to headless services, maybe via some command-line option. Sadly, this discussion means I can't use Istio for now... at least with Envoy... How would it be with Linkerd?
We'll add an option to route directly to endpoints for headless services in the next release. I'm not sure about the state of TCP load balancing for Linkerd.
Can't wait for the next release then!
It's not the proxy that's the issue; it's Pilot. We don't configure Envoy or Linkerd with pod IPs because of the potentially large number of listener blocks or configs. The fix for headless services would be a hack at most, and it will face issues as pods are added to or removed from the headless service (if it's a StatefulSet there might be less churn). The sensible option is to have passthrough-mode support implemented in Envoy, and then add a generic TCP proxy listener in Envoy that matches traffic for the kube-internal subnet range (e.g. 10.0.x.x) and passes it through to the original destination and port. Then one would be able to talk to pods directly, irrespective of headless services or normal TCP services. We could probably even eliminate TCP proxy configuration completely.
Here is the issue in Envoy that is attempting to add this support: envoyproxy/envoy#1246
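In today's Envoy API (added well after this thread), the passthrough idea described above maps to an original-destination listener filter plus an `ORIGINAL_DST` cluster. A minimal sketch of that pattern, not the configuration Istio actually ships:

```yaml
static_resources:
  listeners:
  - name: tcp_passthrough
    address:
      socket_address: { address: 0.0.0.0, port_value: 15001 }
    listener_filters:
    # Recover the address the application originally dialed,
    # before iptables redirected the connection to the proxy.
    - name: envoy.filters.listener.original_dst
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.listener.original_dst.v3.OriginalDst
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: passthrough
          cluster: original_dst_cluster
  clusters:
  # Forwards each connection to whatever destination the client dialed,
  # so no per-pod or per-service TCP routes need to be programmed.
  - name: original_dst_cluster
    type: ORIGINAL_DST
    lb_policy: CLUSTER_PROVIDED
    connect_timeout: 5s
```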
@kyessenov @rshriram How about enabling external traffic for TCP? Then headless services could be defined as external services (they are external to Istio). A related question: does Istio handle TCP traffic (non-HTTP/HTTPS) for headful services?
> How about enabling external traffic for TCP? Then headless services can be defined as external services (they are external to Istio).

K8s external services don't support TCP. Secondly, the user wants to talk directly to pod IPs. We need to process pods in a StatefulSet or headless service like any other pod, thereby providing the ability to dynamically add or remove pods from an upstream cluster.

> A related question - does Istio handle TCP traffic (non-HTTP/HTTPS) for headful services?

We set up a TCP proxy in Envoy.
How would transparent TCP proxying work with mTLS?
I opened an issue on the other repo as well. Just gathering them into one.
This would be very useful. I would like to use Istio but currently can't, because I need my services within the mesh to be able to access DBs and other services that are non-Istiofied.
I have the same problem; I can't access a StatefulSet.
Same as istio/old_pilot_repo#1015
@ldemailly those are not the same. This particular issue is about accessing headless services via pod IPs. It's a bug in Pilot. @ijsnellf is working on it.
AFAIK the bug is that we intercept everything, but as long as we fix it, it's great.
@wattli can you add the details you explained this morning to this bug?
Do you have any news on this?
It likely won't be fixed in the very first 0.2 release, but should be soon after, depending on your exact case (for instance, access to the k8s API server and https services should be the first to get working).
@prune998 we have support for headless services in master. If you are feeling a bit adventurous, we would appreciate some feedback if you could try out the istio.yaml from the istio/istio master branch. You need to make sure that you name the ports for headless services, and that the port on which the headless service is listening does not collide with istio-fied service ports (e.g., both a headless and an istio-fied service on port 80). You can find an example of a headless service in istio/pilot/test/integration/testdata/
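A minimal sketch of a headless service meeting those two requirements (the service name and port are illustrative, not taken from the testdata directory mentioned above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cassandra
  namespace: dev
spec:
  clusterIP: None        # headless: DNS resolves to the pod IPs
  ports:
  - name: tcp-cql        # the port must be named for Pilot to pick it up
    port: 9042           # and must not collide with istio-fied service ports
  selector:
    app: cassandra
```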
That's good news. Will try it today.
Building the whole stack from master seems to be a mess... I think I'll wait until a release (or a nightly?).
@rshriram do you have a pointer to the commit that added headless services support?
It still goes through Envoy.
So maybe check the Envoy /stats (on port 15000) and look for the various timeout counters; if it's Envoy, it should show up there...
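A sketch of that check ($POD is a placeholder for the affected pod; port 15000 is the Envoy admin port mentioned above):

```sh
# Dump Envoy stats from the sidecar and keep the counters that would
# point at proxy-side connection teardowns or timeouts.
kubectl exec -it $POD -c istio-proxy -- \
  curl -s localhost:15000/stats | grep -E 'timeout|destroy'
```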
@rshriram I only see this issue from istio-fied pods (I have about 13 pods that are not istio-fied, running the same stack, that don't see this connection issue). I have one test pod right now, running one of the istio-fied services, to work through these connection issues. It could be some other middleware that is ultimately responsible, but I think Istio has something to do with it because of the setup I just described. I am on GKE, and I don't think they impose any limits like this within their VMs.

Also, I am not 100% sure whether there is data going across the wire or not. I have a connection pool, and I run health checks against the db fairly frequently (several times a minute). I saw this issue even with the health checks running, but that could be because only one connection from the pool was being used by the health check, and when I go to do another query, a different connection gets pulled.

@prune998 The pod consistently times out if I let it sit without making any database calls. I will try the psql client and see if it has the same issue; that will rule out any library issues I may be running into. Also, I am running a new test without the connection expiration so I can check the stats endpoint from the istio-proxy container.

@rshriram @prune998 Thank you both for the help! Thinking maybe we should put this into a separate issue?
Yes. Do you have Istio CA enabled, i.e. mTLS enabled? Can you try with Istio auth disabled and Istio CA not deployed? I have a feeling that we are recycling Envoy every 10-15 minutes to refresh certificates, and as part of the recycle, old connections are being terminated.
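One way to test that hypothesis: Envoy exposes a `server.hot_restart_epoch` gauge that increments on each hot restart, so if it keeps climbing every 10-15 minutes, the proxy is indeed being recycled ($POD again a placeholder):

```sh
# The epoch increments each time pilot-agent hot-restarts Envoy;
# watch it over ~15 minutes to see whether restarts line up with the drops.
kubectl exec -it $POD -c istio-proxy -- \
  curl -s localhost:15000/stats | grep server.hot_restart_epoch
```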
@rshriram I intentionally didn't install Istio CA; I really don't want to add that layer yet. I used this install command: However, I get this back when I run
Is that istio-ca doing what I think it may be doing? How do I disable it?
@prune998 I just ran the test using the psql client.
I think it is safe to assume it is a connection issue and not a library issue.
@hollinwilkins try stopping (scaling to 0) the istio-ca pod.
@prune998 Can I just delete the deployment for istio-ca, or should I scale it down?
Delete it if you're sure you're not using it...
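Either way works; a sketch of both commands, assuming the 0.2-era install put the CA deployment in the istio-system namespace under the name istio-ca:

```sh
# Reversible: scale the CA down to zero replicas...
kubectl -n istio-system scale deployment istio-ca --replicas=0

# ...or remove it entirely if you are sure nothing depends on it.
kubectl -n istio-system delete deployment istio-ca
```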
@prune998 Hehe, gotcha. Also, there is too much information to sift through when I collect stats from istio-proxy. Is there a grep I can use to get the useful stuff regarding disconnects?
You don't need to touch the CA to turn mTLS on/off; you just need to edit the mesh config (from the security FAQ):
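The FAQ steps referenced above likely amounted to toggling the authPolicy line in the istio ConfigMap; a sketch assuming the 0.2-era layout:

```sh
# Open the mesh config for editing...
kubectl -n istio-system edit configmap istio
# ...then comment the line out to disable mTLS (or uncomment to enable):
#   authPolicy: MUTUAL_TLS
# and restart the pilot pod so it picks up the change.
```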
@ldemailly I just edited the config; that line was already commented out. I did see that a previous revision of the config had MUTUAL_TLS enabled, but that must have been from a long time ago.
Actually, reviewing this: coming from the config map, it looks like it was never enabled, even in the previous revision.
Well, the bug is still here then... I think this will change with the new gRPC API in Envoy...
@prune998 Added a PR for troubleshooting in the Istio documentation: istio/istio.io#835
@hollinwilkins @prune998 thanks for your patience in troubleshooting this, and for writing the troubleshooting guide. It appears there is still a minor bug (with an easy workaround), since pilot-agent should not restart Envoy if encryption is disabled, even if istio-ca is present and certificates are refreshed. Until support for SDS (#2120) makes Envoy restarts completely unnecessary, we need to allow both encrypted and un-encrypted services to co-exist in the same cluster, and we need the istio-ca in general. I opened #2427 so that disabling istio-ca is not required. We can close this issue.
@prune998 Starting to see another issue with this. Not sure if it is related to headless services, but it seems Istio has an effect here. After deploying a certain number of pods in a namespace, connections to headless services stop working for some reason. I deploy 6 services injected with the Istio sidecar, and they connect to my database fine. When I deploy the 7th and 8th ones, they cannot connect. Deploying all 8 without Istio causes no issue connecting to the database.