ipv6 validation #3284
I think that all the conditions I can think of for this to work are:
I think that any cluster that's actually working in IPv6-only mode should probably work. I don't know about dual-stack; we may need to do some work with Endpoints, but I haven't looked at dual-stack closely enough to be sure. In any case, the steps from here seem to be the same:
Ideally we figure out a way to run the current set of integration tests in both those cases.
What do we think about running some validation (probably the integration test suite) against an ipv6 cluster in CI, and writing a guide on which flags to flip to support an ipv6-only cluster? Initially there seem to be some bits to do around cluster discovery (see #3564) and around parsing ipv6 addresses consistently; there are some rough edges where some flags need to take
I'm going to tag this v1.15 to account for the ongoing investigation work, not to imply it will be delivered in 1.15.
Strict/Logical DNS clusters fail to parse ipv6 IPs and, according to the Envoy documentation, should not use an IP.
See: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/service_discovery#arch-overview-service-discovery-types
See: envoyproxy/envoy#10489
Limited to bootstrap clusters for now. Also cleans up whitespace in tests.
Updates: projectcontour#3564
Updates: projectcontour#3284
Signed-off-by: Sunjay Bhatia <sunjayb@vmware.com>
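The distinction the commit above draws can be sketched in Go (a hypothetical helper for illustration, not Contour's actual code): Envoy's STRICT_DNS/LOGICAL_DNS discovery types expect a resolvable hostname, so an address that parses as a literal IP (v4 or v6, with or without brackets) should use STATIC instead.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// discoveryType picks an Envoy cluster discovery type for an address:
// STATIC for literal IPs, STRICT_DNS for hostnames that must be
// resolved. Hypothetical helper, not Contour's real implementation.
func discoveryType(address string) string {
	// Accept bracketed IPv6 literals like "[2001:db8::1]" too.
	host := strings.Trim(address, "[]")
	if net.ParseIP(host) != nil {
		return "STATIC"
	}
	return "STRICT_DNS"
}

func main() {
	fmt.Println(discoveryType("2001:db8::1"))       // STATIC
	fmt.Println(discoveryType("[2001:db8::1]"))     // STATIC
	fmt.Println(discoveryType("127.0.0.1"))         // STATIC
	fmt.Println(discoveryType("ratelimit.default")) // STRICT_DNS
}
```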
Ensures we take an ipv6 address w/o brackets, which matches the existing flags.
Updates projectcontour#3284
Signed-off-by: Sunjay Bhatia <sunjayb@vmware.com>
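The bracket handling the commit above describes can be sketched like this (hypothetical names, not Contour's actual code). The trick is that `net.JoinHostPort` re-adds brackets whenever the host contains a colon, so a flag can accept an IPv6 address either with or without brackets and still produce a valid host:port string:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
	"strings"
)

// joinAddrPort accepts an IP with or without surrounding brackets
// ("::1" or "[::1]") and returns a host:port string. Hypothetical
// helper for illustration only.
func joinAddrPort(addr string, port int) (string, error) {
	host := strings.Trim(addr, "[]")
	if net.ParseIP(host) == nil {
		return "", fmt.Errorf("invalid IP address: %q", addr)
	}
	// net.JoinHostPort brackets IPv6 hosts automatically.
	return net.JoinHostPort(host, strconv.Itoa(port)), nil
}

func main() {
	for _, a := range []string{"::1", "[::1]", "127.0.0.1"} {
		s, _ := joinAddrPort(a, 8080)
		fmt.Println(s) // [::1]:8080, [::1]:8080, 127.0.0.1:8080
	}
}
```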
The ratelimit server doesn't work with ipv6, so that integration test is blocked.
The last bits of this are a CI job to run the integration tests against an ipv6-only cluster; what do we think about moving this to the 1.16 milestone?
ratelimit PR to allow the server to listen on any address properly: envoyproxy/ratelimit#252
To save this for later, this is the diff needed to fully configure kind etc. with ipv6 to run the integration tests:
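For reference, the key piece of configuring kind for an IPv6-only cluster is its documented `ipFamily` networking setting; a minimal config sketch (the full diff above covers more than this):

```yaml
# Minimal kind cluster config for an IPv6-only cluster
# (sketch based on kind's documented ipFamily setting).
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: ipv6
```

Passed to kind with `kind create cluster --config <file>`; the host itself also needs working IPv6 support.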
Also, when running the integration-tester test suite, the
Also this diff was needed to get the httpbin fixture to be usable:
Still waiting for some upstream things (ratelimit CI to push an image). Running e2e tests against an ipv6 cluster might be something we consider as part of our expanded testing efforts; removing from 1.16.0 for now.
Have been getting lots of pings from Telco & Cloud customers who would like to run Contour in ipv6 environments, meaning both single-stack and dual-stack k8s clusters. We need to validate Contour deployments on both Tanzu k8s products as well as on DIY cloud platforms. Before identifying the exact set of stipulations for "Contour supports ipv6", the very basic scenario we have in mind is:
We assume the entire k8s cluster is running on ipv6 (worker nodes running ipv6, pods with ipv6 connectivity to each other and to the internet, an ipv6-capable CNI, an ipv6-capable external load balancer). Contour should run as usual and bring traffic into pods running on ipv6 addresses, and the same functionality should hold for pods with dual ipv4/ipv6 addresses.
Timeframe: v1.13 or v1.14, before May.