[IPv6] Extend e2e tests for dual-stack #1192
Conversation
wenyingd
commented
Sep 1, 2020
- Add e2e cases for IPAM allocation in dual-stack.
- Add e2e cases for connectivity, including same Node and different Node cases.
Thanks for your PR. The following commands are available:
Codecov Report
```diff
@@           Coverage Diff            @@
##             ipv6    #1192    +/- ##
======================================
- Coverage   41.88%   40.58%   -1.30%
======================================
  Files          76       76
  Lines       11291     9823    -1468
======================================
- Hits         4729     3987     -742
+ Misses       6178     5475     -703
+ Partials      384      361      -23
```
Flags with carried forward coverage won't be shown.
/test-ipv6-e2e
/test-ipv6-e2e
I think there is a lot of potential for code unification with the existing tests. It seems that these existing tests just need to be extended to support multiple IPs. Then we can avoid duplicating tests for single stack & dual stack.
For example, the existing `podWaitForIP` function could be renamed to `podWaitForIPs` and updated to return a slice of IPs. Tests which depend on that function (e.g. connectivity tests) could be updated to support a slice of IPs. In the single-stack case, the slice would include a single IP and the tests would be equivalent to their current version. In the dual-stack case, we would check connectivity for both address families.
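The reviewer's suggestion (return a slice of IPs so that single-stack is simply the one-element case) could be sketched like this. This is a minimal sketch: the comma-separated input format and the helper name are assumptions for illustration, since the real `podWaitForIPs` would read the Pod status from the Kubernetes API.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parsePodIPs turns a comma-separated IP list reported for a Pod into a
// slice of net.IP. In a single-stack cluster the slice has one element,
// so callers that loop over it behave exactly like the current
// single-IP code; in a dual-stack cluster it has one IP per family.
func parsePodIPs(ipList string) ([]net.IP, error) {
	var ips []net.IP
	for _, s := range strings.Split(ipList, ",") {
		ip := net.ParseIP(strings.TrimSpace(s))
		if ip == nil {
			return nil, fmt.Errorf("invalid IP %q", s)
		}
		ips = append(ips, ip)
	}
	return ips, nil
}

func main() {
	// Dual-stack Pod: one IPv4 and one IPv6 address.
	ips, err := parsePodIPs("10.10.1.4, fd74:ca9b:172:16::4")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(ips))
}
```

A connectivity test would then range over the returned slice and ping each address, which degenerates to the existing behavior for single-stack clusters.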
Actually, I don't object to unifying some generic functions, but I would prefer to have specific tests for dual-stack and single-stack. Dual-stack cases have additional preconditions on the cluster, such as checking that multiple CIDRs are set. As for connectivity verification in a dual-stack cluster, we need to check the address family and find the corresponding address, which makes the test more complicated than the single-stack case. If we keep the code unified, such checks would be unnecessary in a single-stack cluster test.
It seems to me that this is not something that we need to explicitly verify in the tests. For example:
The same test can be used for single-stack and dual-stack. The logic of the test can be as follows:
```go
t.Logf("Ping mesh test between all Pods")
for _, podName1 := range podNames {
	for _, podName2 := range podNames {
		if podName1 == podName2 {
			continue
		}
		for _, podIP := range podIPs[podName2] {
			if err := data.runPingCommandFromTestPod(podName1, podIP, pingCount); err != nil {
				t.Errorf("Ping '%s' -> '%s': ERROR (%v)", podName1, podName2, err)
			} else {
				t.Logf("Ping '%s' -> '%s': OK", podName1, podName2)
			}
		}
	}
}
```
I agree that it makes the test code a bit more complicated if you look at the single-stack case only. But overall, you avoid having duplicate test cases which do pretty much the same thing. Please let me know if I'm missing something.
I have no strong preference on whether we add a new connectivity test case for dual-stack or modify the existing connectivity tests. But I do think we need a separate new test case for dual-stack IPAM IP allocation.
@antoninbas It makes sense that using generic test code could reduce duplication. My only concern is that extending the existing cases rather than writing new ones might break them, but that risk should be avoidable if we control the code quality. Anyway, I will try to extend the existing cases to support dual-stack. Another detail I want to discuss: I would prefer to define a new struct, podIPs, in e2e (might be used in the return value of …)
@wenyingd I'm open to the struct idea. But sometimes a slice is convenient, e.g. in the connectivity tests it may make more sense to loop over the IPs and ensure that we have connectivity to each one. Since these are tests, it's also possible to define a struct like this one:
```go
type podIPs struct {
	IPStrings []string
	IPv4      net.IP // net.IP covers both families; the net package has no IPv4/IPv6 types
	IPv6      net.IP
}
```
The struct is created by …
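Assuming net.IP for both fields (the Go standard library has no net.IPv4 or net.IPv6 types), a hypothetical constructor could classify each address by family using ip.To4(), which returns non-nil only for IPv4 addresses. The helper name below is illustrative, not the PR's actual code.

```go
package main

import (
	"fmt"
	"net"
)

// podIPs mirrors the struct discussed above, with net.IP for both
// address-family fields.
type podIPs struct {
	IPStrings []string
	IPv4      net.IP
	IPv6      net.IP
}

// newPodIPs parses each IP string and records it in the matching
// family field. ip.To4() is non-nil for IPv4 addresses, which is the
// idiomatic way to distinguish families in Go.
func newPodIPs(ipStrings []string) (*podIPs, error) {
	p := &podIPs{IPStrings: ipStrings}
	for _, s := range ipStrings {
		ip := net.ParseIP(s)
		if ip == nil {
			return nil, fmt.Errorf("invalid IP %q", s)
		}
		if ip.To4() != nil {
			p.IPv4 = ip
		} else {
			p.IPv6 = ip
		}
	}
	return p, nil
}

func main() {
	p, _ := newPodIPs([]string{"10.10.1.4", "fd74:ca9b:172:16::4"})
	fmt.Println(p.IPv4, p.IPv6)
}
```

In a single-stack cluster one of the two fields simply stays nil, so tests can branch on which families are present.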
test/e2e/proxy_test.go (outdated)
```diff
@@ -51,6 +50,9 @@ func proxyEnabled(data *TestData) (bool, error) {
 }

 func TestProxyServiceSessionAffinity(t *testing.T) {
 	// Todo: add check for IPv6 address after Antrea Proxy is supported in IPv6
```
Hi @ksamoray, currently I skip these e2e tests for Antrea Proxy in an IPv6 cluster (they still run in an IPv4 cluster). After your changes for Service are ready, please remember to update the related code so that the test cases cover it.
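A skip condition like the one described (skip AntreaProxy tests while the cluster has an IPv6 Pod CIDR) might look like this minimal sketch. The function name and the CIDR-slice input are assumptions; the real e2e framework derives the cluster CIDRs from its test data.

```go
package main

import (
	"fmt"
	"net"
)

// clusterHasIPv6 reports whether any of the cluster's Pod CIDRs is an
// IPv6 CIDR. An e2e test could call this and t.Skipf when it returns
// true, until AntreaProxy supports IPv6.
func clusterHasIPv6(podCIDRs []string) bool {
	for _, c := range podCIDRs {
		_, ipNet, err := net.ParseCIDR(c)
		if err != nil {
			continue // ignore malformed entries in this sketch
		}
		if ipNet.IP.To4() == nil {
			return true
		}
	}
	return false
}

func main() {
	cidrs := []string{"10.10.0.0/16", "fd74:ca9b:172:16::/64"}
	fmt.Println(clusterHasIPv6(cidrs))
}
```

Inside a test this would read something like `if clusterHasIPv6(cidrs) { t.Skipf("AntreaProxy does not support IPv6 yet") }`.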
/test-all
/test-all
1. Extend the generic function "podWaitForIP" to return all assigned IPs of a given Pod.
2. Validate each IP address against the cluster's network CIDR.
3. Use each valid IP to check connectivity.
4. Use each valid IP to execute tests.
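The validation step above (checking each assigned IP against the cluster's network CIDRs) can be sketched as follows. The helper name and inputs are illustrative, not the PR's actual code.

```go
package main

import (
	"fmt"
	"net"
)

// validatePodIP checks that an assigned Pod IP falls inside one of the
// cluster's Pod CIDRs, covering both address families in a dual-stack
// cluster (net.IPNet.Contains handles v4 and v6 uniformly).
func validatePodIP(ipStr string, podCIDRs []string) (bool, error) {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return false, fmt.Errorf("invalid IP %q", ipStr)
	}
	for _, c := range podCIDRs {
		_, ipNet, err := net.ParseCIDR(c)
		if err != nil {
			return false, err
		}
		if ipNet.Contains(ip) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	cidrs := []string{"10.10.0.0/16", "fd74:ca9b:172:16::/64"}
	ok, _ := validatePodIP("fd74:ca9b:172:16::4", cidrs)
	fmt.Println(ok)
}
```

Once an IP passes this check, the tests use it for connectivity checks and the remaining test steps.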
* Enable verbose logging through generate-manifest.sh (#1142): enables verbose logging for antrea-agent and antrea-controller when generating the manifest, to help troubleshooting with an increased log level.
* Fix bug in PR #1142 (#1248): manifest generation was failing in release mode.
* [IPv6] Consume Node.Spec.CIDRs to support dual-stack configuration (#971): consume Node.Spec.CIDRs to support IPv4/IPv6 dual-stack Pod subnets; change NodeConfig.PodCIDR to a slice; change GatewayConfig.IP to a slice to support multiple addresses for antrea-gw0; change InterfaceConfig.IP to a slice to support multiple addresses for a Pod.
* [IPv6] Change openflow pipeline for L2 Pod networking (#1040): add a new table named IPv6 to handle IPv6 ND Solicitation, ND Advertisement and IPv6 multicast traffic; add flows in OpenFlow tables (spoofGuardTable, IPv6, conntrackTable, conntrackStateTable, conntrackCommitTable, L2ForwardingOutTable) for handling IPv6 L2 Pod networking.
* [IPv6] Change host-local IPAM configuration for IPv6 (#1039): add a new field Ranges in IPAMConfig for allocating both IPv4 and IPv6 addresses; populate subnet and gateway for both the IPv4 and IPv6 ranges.
* [IPv6] Use separate fields for IPv4 and IPv6 in GatewayConfig (#1111): replace the IP slice in GatewayConfig with separate IPv4 and IPv6 fields.
* [IPv6] Implement L3 connectivity for IPv6 traffic (#1011): use IPv6 in iptables and ipset configuration; identify IPv6 addresses and configure them in OpenFlow; use the Node internal address for the tunnel.
* [IPv6] Handle Spec.PodCIDR with IPv6 CIDR (#1151): in the IPv6 single-stack case, node.Spec.PodCIDR is configured with an IPv6 CIDR; handle this case and set nodeConfig.PodIPv6CIDR from the parsed CIDR.
* [IPv6] Add support for IPv6 address in antctl and agent's apiserver (#1118): support using IPv6 addresses in OVS tracing; display the Node's and Pods' IPv6 addresses in the agent apiserver. Co-authored-by: Zhecheng Li <zhechel1@uci.edu>
* [IPv6] Support IPv6 in e2e (#1129)
* [IPv6] Display dual stack NodeSubnet in antrea-octant-plugin (#1156): NodeSubnet can have two values in the dual-stack case; enhance the octant plugin to show both subnets.
* [IPv6] Handle dual stack NodeSubnet for monitoring CRD (#1182): rename NodeSubnet to NodeSubnets for AntreaAgentInfo; build a new string slice for the dual-stack Node subnets instead of appending to agentInfo.NodeSubnets directly, to avoid duplicate CIDRs.
* [IPv6][e2e] Fix testDeletePod (#1193): on a dual-stack cluster, podInterfaces[0].IP returns "[ipv4-address], [ipv6-address]"; the previous implementation didn't distinguish the two.
* [IPv6] Collect service CIDR in e2e
* [IPv6] Add support for dual-stack when using kube-proxy for Service (#1200): add a config item for the IPv6 Service CIDR when kube-proxy provides Service functions; output IPv6 traffic from the host gateway if its destination is a Service address; use ct_mark to identify Service traffic and output the reply packet to the host gateway to ensure DNAT processing in iptables.
* [IPv6] Extend e2e tests for dual-stack (#1192): extend the generic function "podWaitForIP" to return all assigned IPs of a given Pod; validate each IP address against the cluster's network CIDR; use each valid IP to check connectivity and to execute tests.
* [IPv6] E2e bug fixes (#1311): busybox nc has no -6 option, so runNetcatCommandFromTestPod() does not need to distinguish an IPv6 environment:

      nc
      BusyBox v1.31.1 (2019-10-28 18:40:01 UTC) multi-call binary.
      Usage: nc [OPTIONS] HOST PORT - connect
             nc [OPTIONS] -l -p PORT [HOST] [PORT] - listen
      -e PROG  Run PROG after connect (must be last)
      -l       Listen mode, for inbound connects
      -lk      With -e, provides persistent server
      -p PORT  Local port
      -s ADDR  Local address
      -w SEC   Timeout for connects and final net reads
      -i SEC   Delay interval for lines sent
      -n       Don't do DNS resolution
      -u       UDP mode
      -v       Verbose
      -o FILE  Hex dump traffic
      -z       Zero-I/O mode (scanning)

  Also fix testCert: IPv6 addresses should be in "[]".
* [IPv6] Fix TestReconcileGatewayRoutesOnStartup failure (#1313): use "ip -6 route" for IPv6 networks.
* [IPv6] Adjust MTU for IPv6 overhead (#1305): if the Antrea MTU is too large in an IPv6 environment, large packets whose size plus overhead exceeds the Node MTU cannot be transmitted across Nodes. IPv6ExtraOverhead (20) comes from observing IPv4 and IPv6 packets in the same situation.
* [IPv6] Fix MTU config (#1317): use the Node's internal address to decide whether the extra IPv6 overhead is needed.
* [IPv6] Skip IPsec e2e test (#1373): with OVS v2.14.0, IPsec is not supported in an IPv6 environment. Also make PodIP output more user-friendly, from:

      Retrieved all Pod IPs: map[test-pod-0-upgp1ung:0xc000708960 test-pod-1-pbva9007:0xc0006ec8a0]

  to:

      Retrieved all Pod IPs: map[test-pod-0-mudzj847:IPv6: fd74:ca9b:172:16::4, IP strings: fd74:ca9b:172:16::4 test-pod-1-apcmyd30:IPv6: fd74:ca9b:172:16:1::3c, IP strings: fd74:ca9b:172:16:1::3c]

* [IPv6] Add 2 Network Policy tests (#1399): two upstream Network Policy tests didn't consider the netmask for IPv6; add corrected versions. When the bug is fixed in the latest release, these two tests can be deleted. Kubernetes PR: kubernetes/kubernetes#93583; test cases: https://github.com/kubernetes/kubernetes/blob/v1.20.0-alpha.0/test/e2e/network/network_policy.go#L1365 and https://github.com/kubernetes/kubernetes/blob/v1.20.0-alpha.0/test/e2e/network/network_policy.go#L1444
* Skip 2 Network Policy testcases before Network Policy IPv6 is supported (#1460)
* [IPv6] Fix after rebasing
* Format code
* Fix TestPodTrafficShaping
* Fix TestIPv6RoutesAndNeighbors
* [IPv6] Fix issues (#1496): unit test; manifest
* [IPv6] Skip TestAntctlProxy for IPv6 (#1498)
* [IPv6] Add IPv6 support for NetworkPolicy: enhance Antrea Controller and Agent to support NetworkPolicy in IPv6; optimize test cases to support IPv6; use a regex in the CRD to validate IPv4 or IPv6 strings; add TestEgressToServerInCIDRBlock and TestEgressToServerInCIDRBlockWithException; networkpolicy_controller.go: PodIPs includes PodIP.
* [IPv6] Fix issues: remove the GitHub Actions integration test; Jenkins: jenkins-integration -> Integration tests
* go fmt
* Add FlowProtocl() to interface Flow
* Remove extra lines when rebasing for an octant commit
* TestIPv6RoutesAndNeighbors: routeClient.Initialize

Co-authored-by: srikartati <stati@vmware.com>
Co-authored-by: Wenying Dong <wenyingd@vmware.com>
Co-authored-by: Mengdie Song <songm@vmware.com>
Co-authored-by: Zhecheng Li <zhechel1@uci.edu>
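As a footnote to the MTU commits in the log above, here is a minimal sketch of the adjustment, assuming a 50-byte tunnel encapsulation overhead (an assumption for illustration) and the 20-byte IPv6ExtraOverhead mentioned in the commit message (the size difference between the 40-byte IPv6 header and the 20-byte IPv4 header):

```go
package main

import "fmt"

const (
	tunnelOverhead    = 50 // assumed encapsulation overhead for the tunnel type
	ipv6ExtraOverhead = 20 // IPv6 header (40B) minus IPv4 header (20B)
)

// mtuFor computes the interface MTU from the Node MTU: subtract the
// tunnel overhead, plus the extra IPv6 overhead when the Node's
// transport (internal) address is IPv6, as #1305/#1317 describe.
func mtuFor(nodeMTU int, transportIPv6 bool) int {
	mtu := nodeMTU - tunnelOverhead
	if transportIPv6 {
		mtu -= ipv6ExtraOverhead
	}
	return mtu
}

func main() {
	fmt.Println(mtuFor(1500, true))
	fmt.Println(mtuFor(1500, false))
}
```

Without the extra subtraction, encapsulated packets near the MTU would exceed the Node MTU on IPv6 transport and fail to cross Nodes, which is the failure the commit describes.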