Omitting listener for management address #1194
Istio 0.2.7, Kubernetes 1.7.8 on GKE, CoreOS nodes
Same with Istio 0.2.10. I changed my application container so it won't restart when the istio-proxy is not yet ready; same behaviour...
I've just deployed Istio onto an alpha GKE 1.8.1 cluster with RBAC and initializers enabled. I've deployed microservices-demo. Specifically, I've only deployed the front-end service. I'm seeing this same issue after creating just a Deployment (with a single pod) and a corresponding service. From Pilot:
and from the istio sidecar:
(the latter is printing messages every 2 seconds). This also causes my application pod to crash periodically and to go in and out of service (according to Kubernetes).
Ah - so after removing my livenessProbe & readinessProbe from my application deployment, it seems I'm no longer getting this problem. This is roughly in line with this paragraph in the quick start doc:
(to clarify, my deployment does have mTLS enabled)
@munnerz I'm not using Auth (mTLS) at all. My issue is different in that whatever I deploy, I have the
seeing this with 0.2.12 as well
Because of this, calls to the service are not being routed through the proxy.
As we can see, the Server header is nginx rather than envoy.
One more point, if it helps. But if this is done,
then I start seeing the collision in the istio-proxy container log.
Unfortunately, there is an issue with liveness/health probes on the pod ports that are used for the service. The warning you see is about the case when the pod port is exposed as a service, and also used as a liveness/health probe. The simplest workaround (that also works for mTLS mode) is to use exec probes with curl, until we fix this bug properly in the upcoming releases.
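For anyone looking for the concrete shape of that workaround, here is a minimal sketch of exec probes using curl; the image name, port, and /healthz path are assumptions, not taken from this thread, and curl has to be available inside the application image:

```yaml
# Minimal sketch of the exec-probe workaround (hypothetical names/ports).
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest            # hypothetical image
    ports:
    - containerPort: 8080           # same port the Service exposes
    livenessProbe:
      exec:
        command: ["curl", "-f", "http://localhost:8080/healthz"]
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:
      exec:
        command: ["curl", "-f", "http://localhost:8080/healthz"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

With probes like these the kubelet runs curl inside the application container, so (as far as I understand) the check stays within the pod and does not depend on Pilot programming a separate health-check listener, which is why it also works in mTLS mode.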
@kyessenov Thanks for the reply
@kyessenov I had a strange (new) behaviour linked to this bug...
But also, it's like there is a network issue between istio-proxy and my processes. I first thought it was due to a liveness/readiness issue with my pod, but from the K8s side it's OK:
So everything seems up and I see a live connection between the proxy and the process, but still nothing goes through. Everything is back to normal as soon as I remove the liveness and readiness probes. Do you know if another issue is open regarding this? Thanks
I'm not sure if this is related, but we're seeing a lot of these same log lines (lds: fetch failure: error adding listener). One pod can't connect to the gRPC service (both in the service mesh). It's receiving a 503. If I tail the However, I don't see it on the server end.
@kyessenov (or anyone else) Has there been any progress on fixing this (other than the previously-mentioned workarounds)? Thanks.
*bumps up
Still happening in 0.4.0
As an update (and possibly to close this issue):
1. If you set a TCP/HTTP readiness/liveness probe on the 'main' port,
2. We are working on a patch to auto-detect TLS, for some upgrade use
3. IMO the best option is to use a separate port for liveness/readiness probes (a sketch follows below). If it is not possible to change the app to use a separate port, the 'exec' probe is the
4. Finally, we can (and plan to) add an extra /healthz to the sidecar, and associated
Regarding warnings/messages: we should reword them a bit and avoid repeating,
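To illustrate the separate-port option, here is a minimal sketch (port numbers, paths, and names are assumptions): the probes target a dedicated health port that is not listed in any Service, so no listener collision can occur.

```yaml
# Hypothetical pod spec: the app serves traffic on 8080 (referenced by the
# Service) and exposes a health endpoint on 8081 that is NOT part of any
# Service, so Pilot never needs two listeners on the same port.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest            # hypothetical image
    ports:
    - containerPort: 8080           # serving port, used by the Service
    - containerPort: 8081           # health-only port, not in the Service
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
    readinessProbe:
      httpGet:
        path: /ready
        port: 8081
```

The trade-off is that the application has to expose a second port just for health checks.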
@costinm, it's still not working, even without TLS (point No 1).
Hi, we ran into some kind of the same problem, where we got this message:
This happens every time we deploy our application, which consists of two containers: our application and istio-proxy. We do have HTTP liveness and readiness probes for this application. After some time, the log prints a segmentation fault message and the proxy process restarts, which somehow makes it work after that. Segmentation fault message:
When we turned off the probes, it started to work properly right after the deployment finished. Env: May I confirm this is the same problem as in this issue?
I never had a seg fault, but the
Hi @prune998, there was no config difference between before and after this crash. Envoy just started to work after the crash. Here is the config:
As for our setup, we create our deployment (a replica set in this case) with Spinnaker. Istio is installed without TLS (plain istio, not istio-auth). The istio sidecar is injected using the istio initializer. Here is the replica set yaml:
Re. the segfault - it seems you're using 0.4.0; I would recommend upgrading to a more recent version. It doesn't look to be related to the liveness probe.
@prune998 can you provide more details - like a snippet of the yaml with the readiness probe, the service definition, etc.? I can try to reproduce it.
@costinm @prune998 I have just noticed that this issue might be related to #2628, which is more straightforward in saying the problem is with the probe. Describing the pods gives me this error. This is how the probes are defined:
It might be a wild guess, but could it be possible that Envoy actually tries to create another 8080 listener even if you have defined one in the pod spec for the application container? I have not read the Envoy code, so I am not so sure about this.
"We explicitly create listeners for all health check ports. We also create listeners for all serving ports from the service spec. When the health/liveness port is the same as the normal serving port for the server, we emit an error (since we can't have two listeners on the same port). So long as the health check is HTTP, and the server is serving HTTP on the duplicated port, there's no problem and health checking passes (see the original issue, I posted a github repo showing this works; Tao has also verified this AFAIK)." Quote from @ZackButcher . @costinm , please help to reassign to proper person to fix this. |
I understand that a fix is in the works for this and that the "duplicate address" message is a red herring when mTLS is not being used. What would be the best way to filter out these messages from the istio-proxy logs? It is adding a lot of noise and unneeded data being sent to CloudWatch for us. We are using Istio 0.7.1. Thanks!
Unfortunately today I don't believe there's a way to disable just those log lines. However, in the upcoming 0.8 release you'll have the ability to use the Envoy v2 APIs. The code in Pilot pushing that data should avoid the duplicate port issue => no more logs in that style. Sorry there's not a more immediate solution.
@ZackButcher @wattli given the 0.8 release is out, can this issue be closed?
Closing the issue since the fixes went out in 0.8. Please reopen if the issue persists.
I have some Kubernetes Deployments/Services with an Istio sidecar that are generating a lot of warnings:
At the same time I have Envoy sidecar errors too:
The deployment is pretty simple, with port 12900 (grpc) and 12901 (http), in a testing Namespace (staging).
The entrypoint in this image connects to a Kafka broker. If no Kafka broker is reachable, the application quits (and the container exits).
My feeling is that, when everything is started, the UserEdge application can't connect to Kafka because the proxy/network setup is not done yet. So the container quits and starts over. Then Istio or Envoy keeps the old listener in memory and thinks there is a duplicate...
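For context, a minimal sketch of what such a two-port Service could look like (the name, labels, and selector are placeholders, not the real manifest; only the ports and namespace come from the description above). The port names follow Istio's protocol-prefix convention so the sidecar knows which protocol each port speaks:

```yaml
# Hypothetical Service for the setup described above.
apiVersion: v1
kind: Service
metadata:
  name: useredge
  namespace: staging
spec:
  selector:
    app: useredge
  ports:
  - name: grpc-useredge
    port: 12900
    targetPort: 12900
  - name: http-useredge
    port: 12901
    targetPort: 12901
```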
As far as I can see, this is adding a LOT of warning logs, but the service is working as expected.
I've been looking around, trying to debug the state of the Envoy sidecar and the Istio mesh, but wasn't able to find anything useful...
Any help is welcome...