We have an issue where the Istio ingress gateway intermittently returns 503 "no healthy upstream" errors, which is causing problems, and we need some assistance. Please see the case study:
1. The "no healthy upstream" errors appear only in the Istio gateway logs. When we check the application pod logs themselves, no errors show up, only healthy 200 responses (see the log-check sketch after this list).
2. The errors start appearing when the Istio ingress gateway replica count is increased.
3. We checked the existing Istio installation and it seems good and healthy; we also upgraded Istio to a newer version.
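A quick way to confirm that the 503s originate at the gateway rather than the application is to look for Envoy's UH response flag ("no healthy upstream hosts") in the gateway access logs. A minimal sketch, assuming the default istio-ingressgateway deployment in istio-system and the text access-log format (adjust the grep to your log format):

```sh
# Pull the last hour of ingress gateway access logs and keep only
# 503 responses carrying the UH flag, which Envoy sets when the
# upstream cluster has no healthy hosts.
kubectl -n istio-system logs deploy/istio-ingressgateway --since=1h \
  | grep ' 503 UH '
```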
Here are more details:
- istioctl analyze --all-namespaces does not show any obvious errors.
- The logs clearly show the Envoy (istio-proxy) pod being disconnected from the mesh.
- kube-proxy logs show healthy iptables syncs.
- We enabled debug logging on istio-proxy to understand the errors better, using istioctl proxy-config log <pod> -n <namespace> --level debug.
- We noticed a high error rate on the xDS connection from the proxy to the istiod pilot service.
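To see which proxies have fallen out of sync with the mesh, istioctl proxy-status is handy: proxies showing STALE (or missing from the list entirely) have lost their xDS connection to istiod. A minimal sketch, assuming istiod runs as the istiod deployment in istio-system:

```sh
# List each proxy's xDS sync state per config type (SYNCED / STALE / NOT SENT).
istioctl proxy-status

# Look for throttling or rate-limit messages in istiod's own logs
# around the timestamps of the gateway errors.
kubectl -n istio-system logs deploy/istiod --since=1h \
  | grep -i -e throttl -e 'rate limit'
```

The istio-proxy debug logs show the following errors: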
2024-03-19T15:59:35.670788Z error xdsproxy upstream [6] error: rpc error: code = ResourceExhausted desc = request rate limit exceeded: rate: Wait(n=1) would exceed context deadline
2024-03-19T15:59:35.670816Z warn xdsproxy upstream [6] terminated with unexpected error rpc error: code = ResourceExhausted desc = request rate limit exceeded: rate: Wait(n=1) would exceed context deadline
2024-03-19T15:59:35.671186Z warning envoy config external/envoy/source/extensions/config_subscription/grpc/grpc_stream.h:177 StreamAggregatedResources gRPC config stream to xds-grpc closed: 8, request rate limit exceeded: rate: Wait(n=1) would exceed context deadline (previously 14, closing transport due to: connection error: desc = "error reading from server: EOF", received prior goaway: code: NO_ERROR, debug data: "graceful_stop" since 0s ago) thread=14
2024-03-19T15:59:56.177601Z warning envoy config external/envoy/source/extensions/config_subscription/grpc/grpc_stream.h:177 StreamAggregatedResources gRPC config stream to xds-grpc closed: 14, connection error: desc = "transport: Error while dialing: dial tcp 10.15.1.156:15012: i/o timeout" (previously 8, request rate limit exceeded: rate: Wait(n=1) would exceed context deadline since 20s ago) thread=14
2024-03-19T16:21:54.333274Z error xdsproxy upstream [12] error: rpc error: code = ResourceExhausted desc = request rate limit exceeded: rate: Wait(n=1) would exceed context deadline
2024-03-19T16:21:54.333299Z warn xdsproxy upstream [12] terminated with unexpected error rpc error: code = ResourceExhausted desc = request rate limit exceeded: rate: Wait(n=1) would exceed context deadline
2024-03-19T16:21:54.333713Z warning envoy config external/envoy/source/extensions/config_subscription/grpc/grpc_stream.h:177 StreamAggregatedResources gRPC config stream to xds-grpc closed: 8, request rate limit exceeded: rate: Wait(n=1) would exceed context deadline (previously 14, connection error: desc = "transport: Error while dialing: dial tcp 10.15.1.156:15012: connect: connection refused" since 0s ago) thread=14
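For what it's worth, the ResourceExhausted "request rate limit exceeded" messages look like istiod's inbound xDS request rate limiter rejecting requests: each new gateway replica opens its own xDS streams, so scaling the ingress gateway up raises the request rate against istiod. If that is the cause, raising the limit and/or scaling istiod should help. A hedged sketch, assuming a recent Istio release where the limit is controlled by the PILOT_MAX_REQUESTS_PER_SECOND environment variable on istiod (verify the variable name and its default against your version):

```sh
# Raise istiod's inbound xDS request rate limit (assumed default: 25/s;
# PILOT_MAX_REQUESTS_PER_SECOND is version-dependent, so verify first).
kubectl -n istio-system set env deployment/istiod PILOT_MAX_REQUESTS_PER_SECOND=100

# Add istiod replicas so the extra xDS load from more gateway pods is spread out.
kubectl -n istio-system scale deployment/istiod --replicas=3
```

The i/o timeout and connection-refused dials to istiod's xDS port 15012 in the same window would also be consistent with istiod restarting or being saturated while the proxies reconnect.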