Overview of the Issue
We are running Consul in AKS across multiple clusters, each with a three-node Consul server cluster and hundreds of members in the members list.
We recently noticed that two different AKS clusters were sharing their members lists, which we do not want; each cluster should only contain the members from that cluster.
We made sure there are firewalls between the two AKS clusters (both are on the same network but have unique subnets) blocking all Consul ports, yet within 6 hours of tearing down Consul (helm uninstall + kubectl delete ns) and re-deploying it, the other cluster's members list came back.
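For reference, the restriction we applied is roughly equivalent to the following Kubernetes NetworkPolicy. This is a simplified sketch only; our real rules are Azure firewall/NSG rules between the two subnets, and the namespace and CIDR below are placeholders:

```yaml
# Sketch of the isolation we intended: allow ingress to the Consul pods
# only from this cluster's own subnet, so gossip (8301 TCP/UDP) and all
# other Consul ports are unreachable from the other AKS cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-consul-ingress
  namespace: consul              # placeholder namespace
spec:
  podSelector: {}                # applies to every pod in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/16    # placeholder CIDR of *this* cluster's subnet
```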
Reproduction Steps
We don't really know how to reproduce it or why it's happening.
Consul info for both Client and Server
build:
prerelease =
revision = 2c56447e
version = 1.11.1
Operating system and Environment details
AKS 1.27 clusters
If this is just how Consul works, please point me to documentation on how to prevent cross-cluster data leakage. We tried blocking all of the Consul ports (8600, 8500, 8501, 8502, 8503, 8300, 8301, 8302) - are we missing something?
I was not able to find anything about preventing data sharing between clusters - only the opposite, i.e. how to share across DCs.
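For what it's worth, the kind of isolation we are trying to achieve looks roughly like this - a distinct gossip encryption key per AKS cluster via the hashicorp/consul Helm chart, assuming `global.gossipEncryption` is the right knob. The secret name, key name, and datacenter value below are placeholders we made up:

```yaml
# Hypothetical per-cluster values.yaml excerpt for the hashicorp/consul Helm chart.
# Each AKS cluster would get its own key (e.g. from `consul keygen`) stored in a
# Kubernetes secret that exists only in that cluster, for example:
#   kubectl -n consul create secret generic consul-gossip-key \
#     --from-literal=key="$(consul keygen)"
global:
  datacenter: aks-cluster-a        # unique datacenter name for this AKS cluster
  gossipEncryption:
    secretName: consul-gossip-key  # secret created above, in this cluster only
    secretKey: key                 # key inside that secret holding the value
```

If gossip encryption (or something else entirely) is the intended way to keep the member lists of separate clusters apart, a pointer to that documentation would be appreciated.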