
Error IST0101 on VirtualService from analyze when using a cluster dnsDomain != "cluster.local" #33174

Open · primeroz opened this issue May 28, 2021 · 8 comments · May be fixed by #51064
Labels: area/networking, area/user experience, lifecycle/staleproof

@primeroz
Bug description
This is a reopening of #30807, in the hope of getting some feedback this time.

When the cluster dnsDomain for Kubernetes is set to something other than the default cluster.local, and a VirtualService references a cross-namespace service by its fully qualified name, istioctl analyze always prints ERROR IST0101 Referenced host not found.

This is true for any value of the dnsDomain other than cluster.local, even cluster1.local.

Note that the configuration does seem to be applied correctly to the Envoy proxies.
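To make the failing shape concrete, here is a minimal sketch of such a VirtualService (the names frontend, backend, and backend-route are hypothetical; cluster1.local stands in for the custom dnsDomain):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: backend-route
  namespace: frontend              # hypothetical namespace holding the VirtualService
spec:
  hosts:
    - backend.backend.svc.cluster1.local        # FQDN of a Service in another namespace
  http:
    - route:
        - destination:
            host: backend.backend.svc.cluster1.local   # analyze flags this host with IST0101
```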

The cluster is created using kubeadm:

  • networking.dnsDomain is set to a custom value (cluster1.local)
  • the IstioOperator resource is customized with the custom dnsDomain in:
    • meshConfig.trustDomain
    • values.global.proxy.clusterDomain
    • values.global.trustDomain

I think those are the right settings for the custom dnsDomain; I could not find any other place where it should be changed. A sketch of what this looks like is shown below.
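A minimal sketch of the corresponding IstioOperator overrides, assuming the example domain cluster1.local (the three field paths are the ones listed above):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    trustDomain: cluster1.local          # custom dnsDomain
  values:
    global:
      trustDomain: cluster1.local
      proxy:
        clusterDomain: cluster1.local    # domain the sidecars use for FQDN resolution
```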

Affected product area:

[ ] Docs
[ ] Installation
[x] Networking
[ ] Performance and Scalability
[ ] Extensions and Telemetry
[ ] Security
[ ] Test and Release
[x] User Experience
[ ] Developer Infrastructure
[ ] Upgrade

Expected behavior

No error is printed unless there is an error

Steps to reproduce the bug
The steps are too complex to list completely here, so I created a gist; these steps use a test repo of mine to help with scripts and setup.

  • Create a Kubernetes cluster with a custom DNS domain != "cluster.local" (see the kubeadm sketch after this list)
  • Set up istiod, using the same DNS domain in the right settings:
    • meshConfig.trustDomain
    • values.global.proxy.clusterDomain
    • values.global.trustDomain
  • Create workloads and services in 2 namespaces with istio-injection enabled
  • Create a VirtualService in one namespace with a destination host in the other namespace, using the fully qualified name service-name.namespace.svc.customDnsDomain
  • Run istioctl analyze and check that you get the IST0101 error
  • Check that your VirtualService has actually propagated to the Envoy proxies
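For the first step, a minimal kubeadm ClusterConfiguration sketch with the custom domain (the apiVersion shown is an assumption and should match your kubeadm release):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  dnsDomain: cluster1.local   # anything other than the default cluster.local reproduces the error
```

The file is then passed to kubeadm init --config <file>.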

Version (include the output of istioctl version --remote and kubectl version --short and helm version --short if you used Helm)

  • Istio 1.5.10 / 1.6.14 / 1.7.7 / 1.8.3 / 1.9.0
  • Kubernetes 1.16.15 / 1.19.7

How was Istio installed?

Istio-Operator

Environment where the bug was observed (cloud vendor, OS, etc)
AWS and Kind

Additionally, please consider running istioctl bug-report and attaching the generated cluster-state tarball to this issue. Refer to the cluster state archive for more details.

@istio-policy-bot added the lifecycle/stale label on Aug 26, 2021
@primeroz (Author)

Would it be possible to get either an ACK on this issue or a delete?

I mean... I can keep recreating it every time it gets closed by the bot, but it feels like a waste of time :)

Thanks!

@istio-policy-bot added the lifecycle/automatically-closed label on Sep 11, 2021
@Strasser-Pablo

Got exactly the same problem.

@hanxiaop (Member) commented Jan 8, 2024

I think there is an issue similar to this one. I will take a look.

@hanxiaop (Member) commented Jan 8, 2024

#48326

@kaiburjack

@primeroz, @hanxiaop: Both tickets (this one and #48326) have now been auto-closed because they did not receive any updates from Istio team members.
I was surprised at first by the apparently low number of open issues on the Istio GitHub repository, but now I understand why: most issues simply get auto-closed.
This is rather discouraging for people who find issues and report them and then, year after year, nothing happens.
I understand that the Istio team is probably not that large and cannot tackle hundreds of new issues every week, and some triaging must take place so as not to drown in thousands of issues over time... but they're still issues nonetheless.

I think a disclaimer shown when you start writing a new issue, saying:

Your issue is likely not going to be handled and will be auto-closed soon, so think twice about whether you really want to open one

would probably be helpful.

@hanxiaop (Member)

@kaiburjack I'll reopen this. I think @nicole-lihui may already be working on this, but I'm not sure of the progress.

@hanxiaop reopened this Mar 29, 2024
@istio-policy-bot removed the lifecycle/stale label Mar 29, 2024
@hanxiaop added the lifecycle/staleproof label and removed the lifecycle/automatically-closed label Mar 29, 2024
@hanxiaop (Member)

@kaiburjack Sorry for missing the issue. Usually I would comment "not stale" or add a staleproof label.

@nicole-lihui (Member)

👀 I kinda forgot about this issue too.
I've got a few ideas, and now I'm ready to tackle it.

@nicole-lihui nicole-lihui self-assigned this Apr 3, 2024
nicole-lihui added a commit to nicole-lihui/istio that referenced this issue May 15, 2024