Bug description
Hi,
Not sure where to go with this, so I figured I'd share the observation in case anyone has any ideas.
We recently upgraded from 1.7 to 1.8.3. On 1.7 we'd periodically see 0 DCs (expected: some of our clients have a low tolerance for slow responses and kill the request before a response is returned, hence the DC response flag and the 0 response code, since the request never completed).
Note: we also see this on our testing 1.9 cluster.
On 1.8, we're seeing DCs with status codes, as well as without.
You can see how pronounced it is here: [screenshot]
And you can see that it's across most response codes: [screenshot]
There is no correlation with source or destination workload; it's cluster-wide.
Note that these metrics are `reporter=source`, i.e. reported from the source service.
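For context, a minimal sketch of the kind of Prometheus query behind these observations, using the standard Istio metric and label names (our actual dashboards may differ):

```promql
# Per-response-code rate of client-reported (reporter="source") requests
# whose response flags include DC (downstream connection termination).
# response_flags can carry multiple flags, hence the regex match.
sum by (response_code) (
  rate(istio_requests_total{reporter="source", response_flags=~".*DC.*"}[5m])
)
```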
[ ] Docs
[ ] Installation
[x] Networking
[ ] Performance and Scalability
[ ] Extensions and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure
[ ] Upgrade
Expected behavior
Steps to reproduce the bug
Version (include the output of `istioctl version --remote`, `kubectl version --short`, and `helm version --short` if you used Helm)
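For reference, those commands as they'd be run:

```sh
istioctl version --remote   # control plane and data plane versions
kubectl version --short     # Kubernetes client and server versions
helm version --short        # only if Istio was installed via Helm
```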
How was Istio installed?
Environment where the bug was observed (cloud vendor, OS, etc)
Additionally, please consider running `istioctl bug-report` and attaching the generated cluster-state tarball to this issue.
Refer to the attached cluster-state archive for more details.
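A minimal sketch of capturing that state, assuming `istioctl` 1.8+ is on the PATH:

```sh
# Collects cluster state (Istio config, proxy status, logs) into a tarball
# in the current directory, which can then be attached to the issue.
istioctl bug-report
```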