
test: Fix fragment tracking test under KUBEPROXY=1 #11098

Merged: 2 commits into master on Apr 27, 2020

Conversation

@pchaigno (Member) commented Apr 22, 2020

The first commit simplifies the fragment tracking test to match on the 3-tuple instead of the 4-tuple. The second commit fixes the test to support KUBEPROXY=1. See the commit messages for details.

Fixes: #10929
/cc @qmonnet

@pchaigno added the labels kind/bug/CI (a bug in the testing code), area/CI (Continuous Integration testing issue or flake), and release-note/misc (changes with no direct user impact) on Apr 22, 2020
@maintainer-s-little-helper bot added this to "In progress" in 1.8.0 on Apr 22, 2020
@pchaigno (Member, Author)

test-me-please

@coveralls commented Apr 22, 2020

Coverage decreased (-0.003%) to 44.749% when pulling e7c48a2 on pr/pchaigno/fix-fragment-test-with-kubeproxy into 89b622c on master.

@qmonnet (Member) left a review comment

Thanks a lot! This looks great and correct, as far as my understanding of the issue goes.

The test does become harder to read, but I see no ideal solution. We could split the test function into separate kube-proxy and kube-proxy-free cases, but we would end up with something longer overall and a lot of duplicated code. Still, a few suggestions inline.

Could you please explain what you mean about the fromNode == helpers.K8s2 case? I didn't get your point :/.

Four inline review comments on test/k8sT/Services.go (all resolved; three outdated)
@pchaigno (Member, Author)

Thanks for taking a look!

> The test does become harder to read, but I see no ideal solution. We could split the test function into separate kube-proxy and kube-proxy-free cases, but we would end up with something longer overall and a lot of duplicated code.

What about:

> Maybe matching on the source port without the source IP address would be enough?

Do we really risk having flakes if we match on the 3-tuple instead of the 4-tuple? I've had tcpdump running for a long time during those tests and never saw anything else using UDP port 12345. I think it would simplify the test a lot.

@qmonnet (Member) commented Apr 23, 2020

> Do we really risk having flakes if we match on the 3-tuple instead of the 4-tuple?

Probably not. I matched on as much as I could out of an abundance of caution, but it sounds very unlikely that we would get a duplicate flow when matching just on the 3-tuple. That's a good idea.

@pchaigno added 2 commits. First commit message:

The fragment tracking test sends a couple of fragments to a K8s service and checks the ctmap contents to ensure they were properly sent and received. It used to filter based on the 4-tuple; that, however, made the test convoluted when adding support for kube-proxy (see the second commit).

This commit therefore switches to filtering based on the 3-tuple (source port, destination IP, and destination port). Support for sending from k8s2 is also removed since it was unused.

Signed-off-by: Paul Chaignon <paul@cilium.io>
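
To make the 3-tuple match concrete, here is a minimal, self-contained sketch of what such a filter over the conntrack dump could look like. The column positions and the exact `cilium bpf ct list global` output format are assumptions for illustration; this is not the PR's actual code.

```go
package main

import "fmt"

// ctCountCmd builds a shell pipeline that counts conntrack entries matching
// the 3-tuple: any source IP, a fixed source port, and a fixed destination
// IP and port.
func ctCountCmd(srcPort int, dstIP string, dstPort int) string {
	// Assuming dump lines shaped like "UDP IN 10.0.1.5:12345 -> 10.96.0.10:69 ...",
	// $3 holds "srcIP:srcPort" and $5 holds "dstIP:dstPort".
	filter := fmt.Sprintf(`$3 ~ /:%d$/ && $5 == "%s:%d"`, srcPort, dstIP, dstPort)
	return fmt.Sprintf(`cilium bpf ct list global | awk '%s { n++ } END { print n+0 }'`, filter)
}

func main() {
	// 12345 is the source port mentioned in the conversation; the
	// destination address and port are made up for the example.
	fmt.Println(ctCountCmd(12345, "10.96.0.10", 69))
}
```

Leaving the source IP out of the filter is exactly the simplification discussed above: the source port plus destination pin the flow down well enough that a collision is very unlikely.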
Second commit message:

The fragment tracking test currently fails when kube-proxy is enabled because the destination IP address and port are sometimes not DNATed, so the awk filter on the ctmap dump output fails in that case.

This commit fixes the issue by checking whether kube-proxy is enabled and adapting the destination to match on accordingly.

Fixes: #10929
Signed-off-by: Paul Chaignon <paul@cilium.io>
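
Read together with the first commit, the fix amounts to picking the expected destination based on how the service is translated: with kube-proxy the conntrack entry may still hold the service's ClusterIP and port, while with BPF service handling it holds the backend address. A rough, self-contained sketch under that reading; the flag, names, and addresses are stand-ins, not the test's actual helpers.

```go
package main

import "fmt"

// expectedDst returns the destination to match on in the ctmap dump.
func expectedDst(kubeProxy bool, clusterIP string, svcPort int,
	backendIP string, backendPort int) (string, int) {
	if kubeProxy {
		// With kube-proxy, the datagram may traverse the BPF datapath
		// before iptables DNAT, so the entry can still carry the
		// service address.
		return clusterIP, svcPort
	}
	// With BPF-based service handling, the translation is done in the
	// datapath and the entry records the backend.
	return backendIP, backendPort
}

func main() {
	ip, port := expectedDst(true, "10.96.0.10", 69, "10.0.1.20", 69)
	fmt.Printf("expect conntrack entries towards %s:%d\n", ip, port)
}
```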
@pchaigno force-pushed the pr/pchaigno/fix-fragment-test-with-kubeproxy branch from acd9ae4 to e7c48a2 on April 24, 2020
@pchaigno changed the title from "WIP: test: Fix fragment tracking test under KUBEPROXY=1" to "test: Fix fragment tracking test under KUBEPROXY=1" on Apr 24, 2020
@pchaigno pchaigno marked this pull request as ready for review April 24, 2020 17:35
@pchaigno pchaigno requested a review from a team as a code owner April 24, 2020 17:35
@pchaigno pchaigno requested a review from qmonnet April 24, 2020 17:35
@pchaigno (Member, Author)

I ran the fragment tracking test locally, 29 times with kube-proxy and 160 times without. I haven't seen any flakes, so matching on the 3-tuple is probably safe. We could still hit issues once we start running the test as part of the full test suites, though.

@qmonnet (Member) commented Apr 24, 2020

We look at the number of packets right before and right after sending the datagram. To have a conflict, not only would we need another flow with the same IP and ports, which is not very likely, but it would also have to send frames within a very short window. I think it's safe to match on the 3-tuple.
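
The flake-safety argument rests on a count-delta check: snapshot the counter, send, snapshot again. Here is a runnable sketch of that idea, with placeholder functions rather than the test's real helpers.

```go
package main

import "fmt"

// checkDelta captures the reasoning above: a colliding flow would have to
// match the same 3-tuple AND send frames inside the short window between
// the two counter reads.
func checkDelta(count func() int, send func(), want int) error {
	before := count() // snapshot right before sending
	send()            // send the fragmented datagram
	after := count()  // snapshot right after
	if got := after - before; got != want {
		return fmt.Errorf("expected %d new packets, saw %d", want, got)
	}
	return nil
}

func main() {
	packets := 0
	send := func() { packets += 2 } // pretend the datagram splits into 2 fragments
	fmt.Println(checkDelta(func() int { return packets }, send, 2)) // <nil>
}
```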

@qmonnet (Member) left a review comment

Looks great, thanks a lot for debugging and patching this!

@aanm aanm merged commit 0e772e7 into master Apr 27, 2020
1.8.0 automation moved this from In progress to Merged Apr 27, 2020
@aanm aanm deleted the pr/pchaigno/fix-fragment-test-with-kubeproxy branch April 27, 2020 11:44
Development

Successfully merging this pull request may close this issue:

CI: K8sServicesTest Checks service across nodes Supports IPv4 Fragments: Failed to account for IPv4 fragments (#10929)