
no healthy upstream when simultaneously creating Service+Ingress #35309

Closed
clementnuss opened this issue Sep 22, 2021 · 2 comments
Labels
area/networking, lifecycle/automatically-closed, lifecycle/stale

Comments

@clementnuss

clementnuss commented Sep 22, 2021

Bug Description

I am currently testing/comparing a series of Ingress controllers, and to that end I am testing how the controllers behave in dynamic Ingress/Service environments.

A test round unfolds as follows: a bash script automatically creates 50 Services + Ingresses, runs a series of k6 load tests while no changes are made to the Ingress definitions, and then another series of k6 tests while the Ingresses are modified at random (backends shuffled, deleted/re-created). Finally, the script deletes the namespace containing the test deployment (and therefore the Services and Ingresses).
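For concreteness, here is a minimal sketch of one such test round. The namespace name, the echo-$i Service/Ingress names, the "istio" IngressClass, and the port numbers are illustrative assumptions rather than the original script, and the backing Deployments are omitted:

#!/usr/bin/env bash
# Minimal sketch of one test round: create 50 Service+Ingress pairs, each pair
# applied together so the Service and its Ingress reach the API server at
# effectively the same time (names and counts are illustrative).
set -euo pipefail

NS=ingress-bench   # hypothetical test namespace
kubectl create namespace "$NS" --dry-run=client -o yaml | kubectl apply -f -

for i in $(seq 1 50); do
  kubectl apply -n "$NS" -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: echo-$i
spec:
  selector:
    app: echo-$i
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-$i
spec:
  ingressClassName: istio   # assumes an IngressClass named "istio" exists
  rules:
  - host: echo-$i.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-$i
            port:
              number: 80
EOF
done

# ... run the k6 load tests, shuffle/re-create Ingresses, then tear down:
kubectl delete namespace "$NS"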

If I run the same test round / bash script again directly afterwards, I get a lot of "no healthy upstream" responses from Istio. Instead of 100% HTTP 200 OK responses, I get ~40% 503 "no healthy upstream" responses. The only remedy then is to kill `istiod` and `istio-ingressgateway`, after which everything is perfectly fine again!
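For reference, one way to do that restart without deleting pods by hand; the kube-istio namespace (taken from the istioctl command below) and the default deployment names are assumptions:

# Restart the control plane and the ingress gateway, then wait for the rollouts.
kubectl -n kube-istio rollout restart deployment istiod istio-ingressgateway
kubectl -n kube-istio rollout status deployment istiod
kubectl -n kube-istio rollout status deployment istio-ingressgateway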

I think this issue is related to #35172 and #35293. I have observed that when I create my Services, sleep 1, and then create the Ingresses, I do not hit the issue described above and Istio works perfectly fine, even after several create/test/delete rounds.
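An illustrative sketch of that workaround; services.yaml and ingresses.yaml are hypothetical placeholders for the generated manifests, not files from the original script:

# Apply all Services first, give istiod a moment to process them,
# then apply the matching Ingresses.
kubectl apply -n ingress-bench -f services.yaml
sleep 1
kubectl apply -n ingress-bench -f ingresses.yaml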

Version

❯ istio-1.11.2/bin/istioctl --istioNamespace kube-istio version
client version: 1.11.2
control plane version: 1.11.2
data plane version: 1.11.2 (5 proxies)
❯ kubectl version --short
Client Version: v1.22.2
Server Version: v1.21.4

Additional Information

No response

@ramaraochavali (Contributor) commented:

This seems to be the same issue as #35293. Sorry for the regression. #35298 fixes it for 1.11.

@istio-policy-bot added the lifecycle/stale label on Dec 22, 2021
@istio-policy-bot commented:

🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2021-09-22. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.

Created by the issue and PR lifecycle manager.

@istio-policy-bot added the lifecycle/automatically-closed label on Jan 6, 2022