Fix and test multicluster with .global stub domain #29335
Use Case: StatefulSet replicas (redis & cockroachdb) - multi-cluster / multi-network
@nmittler is this still desired?
@howardjohn yeah, but it's not a blocker for 1.9. Removed the milestone.
Think we will do this in 1.9.x? Or is it 1.10?
Not entirely sure if we have the bandwidth ... it would be good to get it in 1.9.x. I suspect 1.10 at the earliest.
@nmittler are we pushing this off to 1.11?
@brian-avery Yes, I've updated the milestone.
@stevenctl I'm assigning this to you so we don't forget about it. Feel free to prioritize or re-assign.
@nmittler @stevenctl Are we pushing this to 1.12?
Either pushing it, or dropping it in favor of something else.
Sorry to jump in, but I'm curious about this. Although it is quite cumbersome to define many ServiceEntry resources in multiple clusters, it is still great to have such an option available. I understand this is not the most standard setup, and I'm not sure how this plays with other multicluster-related efforts and changes such as Kubernetes MCS, but it is possibly worth having a dedicated section in the documentation. If there is anything I can help with, I'd be more than happy to!
For granular control, we still have the clusterLocal option in mesh config, although it's not the best API. We're also adding some new locality load balancing configuration options, which again aren't the best form of policy for this, since they only set priority and don't completely restrict using the different endpoints: istio/api#2043. I think MCS is at the root of how we tackle this problem, but we need our own stable API for it that works with the newer multi-cluster model.
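For reference, the mesh-config option mentioned above looks roughly like this (a sketch; the host name is a placeholder, and this assumes installation via the IstioOperator API):

```yaml
# Sketch of meshConfig.serviceSettings with clusterLocal.
# Hosts listed here are treated as cluster-local, so clients
# only reach endpoints in their own cluster.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    serviceSettings:
    - settings:
        clusterLocal: true
      hosts:
      - "mysvc.myns.svc.cluster.local"  # placeholder service host
```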
Thanks for the details. Those are interesting, but indeed do not provide the granular control I'm after. Although I understand there will be further changes with MCS and other API updates, using ServiceEntry + DestinationRule + Gateway resources to get granular multicluster control is stable from an Istio API point of view, and potentially worth adding documentation for. Even with the new multicluster model Istio introduces in the future, I imagine the above resources will keep their backward compatibility, and having a doc test for such a use case may be helpful for migration when the new model is ready. If that sounds like a possibility, I can work on creating a new dedicated doc with test steps to see how that works with the other docs. I've never worked on the doc tests, but the steps look to be well documented, so I can give it a try 👍
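For context, a per-cluster ServiceEntry of the kind discussed here looks roughly like this (a sketch; the `.global` host, port, and gateway address are placeholders, and this assumes the remote cluster exposes services through an east-west gateway):

```yaml
# Sketch: make a service in a remote cluster addressable from this cluster.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: mysvc-remote
spec:
  hosts:
  - mysvc.myns.global          # placeholder .global host
  location: MESH_INTERNAL
  ports:
  - number: 8080               # placeholder service port
    name: http
    protocol: HTTP
  resolution: DNS
  endpoints:
  - address: remote-gateway.example.com  # placeholder: remote cluster's gateway address
    ports:
      http: 15443              # Istio's default mTLS auto-passthrough port
```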
@stevenctl @nmittler Are we pushing this to 1.13?
@Kmoneal yes, I've updated the milestone.
Could you expand a bit more on your use case? Are you just trying to call service endpoints in a particular cluster? Say, from a client running in another cluster?
Yes, that's correct in our use case. If I'm not mistaken, the same can be achieved with the MCS ServiceExport, but that would require GKE or Submariner from my understanding. We have been using Istio for a while, so MCS was not an option back then. We are currently on an older version of Istio and working on the upgrades to start experimenting with MCS, so that may be the way forward instead.
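For anyone following along, exporting a service with MCS comes down to a small resource like the following (a sketch; it assumes an MCS controller such as the GKE or Submariner implementations mentioned above, and the names are placeholders):

```yaml
# Sketch of a Kubernetes MCS ServiceExport.
# Creating this in a cluster exports the matching Service to the ClusterSet.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: mysvc      # placeholder: must match the Service name
  namespace: myns  # placeholder: must match the Service namespace
```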
@nmittler Great, thanks for the pointer. Just for me to better understand this, does this mean adding a new label on the Service?
You won't have to add the labels on the Service, just use the label as a selector in the DR/VS. IIUC the DR/VS should exist in each cluster, and the VS will have a per-cluster variation to select the "targetCluster" from the different subsets.
Ah sorry, I misunderstood that part.
So the DR will be something like:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: mysvc-dr
spec:
  host: mysvc.myns.svc.cluster.local
  subsets:
  - name: cluster-1
    labels:
      topology.istio.io/cluster: cluster-1
  - name: cluster-2
    labels:
      topology.istio.io/cluster: cluster-2
```

Then in the VS you can route however you'd like based on the labels of the source workload. In the example below, I make all requests stay in the same cluster (e.g. clients in cluster-1 only call endpoints that are also in cluster-1):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: mysvc-vs
spec:
  hosts:
  - mysvc.myns.svc.cluster.local
  http:
  - name: "cluster-1-local"
    match:
    - sourceLabels:
        topology.istio.io/cluster: "cluster-1"
    route:
    - destination:
        host: mysvc.myns.svc.cluster.local
        subset: cluster-1
  - name: "cluster-2-local"
    match:
    - sourceLabels:
        topology.istio.io/cluster: "cluster-2"
    route:
    - destination:
        host: mysvc.myns.svc.cluster.local
        subset: cluster-2
```
@stevenctl @howardjohn IIRC this is no longer feasible, since a number of other changes have widened the gap to getting the stub domain working properly. I believe with workarounds such as #29335 (comment), the need for the stub domain is significantly reduced. Can we just close this?
SGTM – that seems to be where every conversation we've had ended up. |
As of Istio 1.8, we no longer recommend using the `.global` stub domain for multi-primary configurations. It was never well tested and has been in a partially broken state since 1.6, as shown in bug reports.
We should add some minimal testing and get it working again to ease users' migration from earlier (alpha) versions of multicluster.
- [ ] Docs
- [ ] Installation
- [x] Networking
- [ ] Performance and Scalability
- [ ] Extensions and Telemetry
- [ ] Security
- [x] Test and Release
- [ ] User Experience
- [ ] Developer Infrastructure
- [ ] Upgrade
Expected behavior
Steps to reproduce the bug
Version (include the output of `istioctl version --remote` and `kubectl version --short` and `helm version --short` if you used Helm)
How was Istio installed?
Environment where the bug was observed (cloud vendor, OS, etc)
Additionally, please consider running `istioctl bug-report` and attaching the generated cluster-state tarball to this issue. Refer to the cluster state archive for more details.