gha: additionally cover BPF masquerade in clustermesh E2E tests
[ upstream commit a1089a7 ]

[ backporter's notes: we keep masquerade set to false on upgrade tests
  for 1.14 due to limitations outlined in
  #14350. However we still
  backport the rest of the changes as regular non-upgrade tests still
  benefit from it. ]

Currently, BPF masquerade is always disabled in the clustermesh
E2E tests, due to unintended interactions with the Docker iptables
rules, which break DNS resolution [1]. Instead, let's explicitly
configure external upstream DNS servers for coredns, so that we
can also enable this feature when KPR is enabled.

While at it, let's also make the KPR setting explicit, instead
of relying on the Cilium CLI auto-detection (which is based on
whether the kube-proxy daemonset is present or not).

[1]: #23283
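The coredns workaround boils down to pinning the deployment to explicit IPv4-only upstream resolvers; a minimal sketch of the patch used in the workflows (8.8.4.4 and 8.8.8.8 are the public resolvers the workflow picks):

```shell
#!/bin/sh
# Strategic-merge patch forcing coredns to use explicit IPv4-only
# upstream nameservers, bypassing the node's resolv.conf.
COREDNS_PATCH='
spec:
  template:
    spec:
      dnsPolicy: None
      dnsConfig:
        nameservers:
        - 8.8.4.4
        - 8.8.8.8
'
echo "$COREDNS_PATCH"
# Applied per cluster with:
#   kubectl --context <ctx> patch deployment -n kube-system coredns \
#     --patch="$COREDNS_PATCH"
```

`dnsPolicy: None` disables the default DNS inheritance from the node, so only the listed nameservers are used.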

Signed-off-by: Marco Iorio <marco.iorio@isovalent.com>
Signed-off-by: Nicolas Busseneau <nicolas@isovalent.com>
giorio94 authored and julianwiedmann committed Feb 9, 2024
1 parent 62f2d16 commit 0fa179c
Showing 2 changed files with 24 additions and 3 deletions.
5 changes: 3 additions & 2 deletions .github/workflows/conformance-clustermesh.yaml
@@ -211,7 +211,8 @@ jobs:
--helm-set=operator.image.suffix=-ci \
--helm-set=operator.image.tag=${SHA} \
--helm-set=operator.image.useDigest=false \
--helm-set=bpf.masquerade=false \
--helm-set=kubeProxyReplacement=${{ matrix.kube-proxy == 'none' }} \
--helm-set=bpf.masquerade=${{ matrix.kube-proxy == 'none' }} \
--helm-set=bpf.monitorAggregation=none \
--helm-set=hubble.enabled=true \
--helm-set=hubble.relay.enabled=true \
@@ -345,8 +346,8 @@ jobs:
# Make sure that coredns uses IPv4-only upstream DNS servers also in case of clusters
# with IP family dual, since IPv6 ones are not reachable and cause spurious failures.
# Additionally, this is also required to workaround #23283.
- name: Configure the coredns nameservers
if: matrix.ipfamily == 'dual'
run: |
COREDNS_PATCH="
spec:
22 changes: 21 additions & 1 deletion .github/workflows/tests-clustermesh-upgrade.yaml
@@ -122,7 +122,7 @@ jobs:
--set=clustermesh.apiserver.kvstoremesh.image.override=quay.io/${{ env.QUAY_ORGANIZATION_DEV }}/kvstoremesh-ci:${SHA} \
"
# * bpf.masquerade is disabled due to #23283
# * bpf.masquerade is disabled due to https://github.com/cilium/cilium/issues/14350
# * Hubble is disabled to avoid the performance penalty in the testing
# environment due to the relatively high traffic load.
# * We enable the clustermesh-apiserver (although with zero replicas)
@@ -139,6 +139,7 @@
--set=ipv6.enabled=true \
--set=clustermesh.useAPIServer=true \
--set=clustermesh.apiserver.replicas=${{ matrix.external-kvstore && '0' || '1' }} \
--set=kubeProxyReplacement=${{ matrix.kube-proxy == 'none' }} \
--set=clustermesh.config.enabled=true"
# Run only a limited subset of tests to reduce the amount of time
@@ -210,6 +211,25 @@ jobs:
config: ./.github/kind-config-cluster2.yaml
wait: 0 # The control-plane never becomes ready, since no CNI is present

# Make sure that coredns uses IPv4-only upstream DNS servers also in case of clusters
# with IP family dual, since IPv6 ones are not reachable and cause spurious failures.
# Additionally, this is also required to workaround #23283.
- name: Configure the coredns nameservers
run: |
COREDNS_PATCH="
spec:
template:
spec:
dnsPolicy: None
dnsConfig:
nameservers:
- 8.8.4.4
- 8.8.8.8
"
kubectl --context ${{ env.contextName1 }} patch deployment -n kube-system coredns --patch="$COREDNS_PATCH"
kubectl --context ${{ env.contextName2 }} patch deployment -n kube-system coredns --patch="$COREDNS_PATCH"
- name: Create the IPSec secret in both clusters
if: matrix.encryption == 'ipsec'
run: |
