v1.10 backports 2022-04-19 #19482
Conversation
/test-backport-1.10 |
/test-gke Looks like CI images were not ready. |
My changes look good to me. Thanks!
/test-1.17-4.9 |
/test-1.18-4.9 |
/test-1.19-4.9 |
/test-1.21-4.9 |
/test-1.19-4.9 |
/test-1.17-4.9 |
Looks like we hit some BPF regeneration issue on all 4.9 jobs, e.g. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.17-kernel-4.9/601/:
Unfortunately artifacts containing the agent logs don't seem to be available on any of the jobs. I assume this is caused by the backport of #19308, so I'm dropping that commit to verify. |
[ upstream commit cfec27a ] This commit increases the VM boot timeout while decreasing the overall timeout :mindblown: We currently run the vagrant-ci-start.sh script with a 15m timeout and retry twice if it fails. That takes up to 45m in total if all attempts fail, as is frequently happening in CI right now. In particular, if the script fails simply because it takes on average more than 15m, then it is likely to fail all three times. This commit instead increases the timeout from 15m to 25m and removes the retries. The goal is obviously to succeed on the first try :p Ideally, we would investigate why it now takes longer to start the VM. But this issue has been happening for a long time, and because of the retries, we probably didn't even notice the increase at the beginning: if it takes on average 15min, it might fail half the time and the test might still succeed most of the time. That is, the retries helped hide the increase. Signed-off-by: Paul Chaignon <paul@cilium.io> Signed-off-by: Tobias Klauser <tobias@cilium.io>
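The worst-case wall-clock arithmetic behind that commit can be sketched as follows. This is a minimal illustration of the numbers in the commit message, not the actual CI code; the variable names are made up for the example.

```shell
#!/bin/sh
# Old setup: a 15m timeout per attempt, with two retries (3 attempts total).
OLD_TIMEOUT_MIN=15
OLD_ATTEMPTS=3

# New setup: a single 25m attempt, no retries.
NEW_TIMEOUT_MIN=25
NEW_ATTEMPTS=1

# Worst case if every attempt times out:
echo "old worst case: $((OLD_TIMEOUT_MIN * OLD_ATTEMPTS))m"   # 45m
echo "new worst case: $((NEW_TIMEOUT_MIN * NEW_ATTEMPTS))m"   # 25m
```

So even though the per-attempt timeout grows, dropping the retries cuts the worst-case provisioning time from 45m to 25m, and a run that needs slightly more than 15m now succeeds on the first try instead of timing out three times.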
[ upstream commit 29c3ebd ] Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Tobias Klauser <tobias@cilium.io>
[ upstream commit b7002f5 ] Use the correct terminology ('session affinity' vs 'service affinity'), and fix a typo. Signed-off-by: Julian Wiedmann <jwi@isovalent.com> Signed-off-by: Tobias Klauser <tobias@cilium.io>
[ upstream commit 6c34c93 ] Improve the error logs thrown by the port validation logic so that users can take the necessary actions. Signed-off-by: Aditi Ghag <aditi@cilium.io> Signed-off-by: Tobias Klauser <tobias@cilium.io>
[ upstream commit 79d53af ] Local redirect policy requires Kube-proxy replacement, and the feature flag to be enabled. Rename the section that outlines these steps so that users are less likely to miss them. Suggested-by: Raymond de Jong <raymond.dejong@isovalent.com> Signed-off-by: Aditi Ghag <aditi@cilium.io> Signed-off-by: Tobias Klauser <tobias@cilium.io>
Force-pushed from 2fded9c to 7e4f324
/test-backport-1.10 |
/test-1.20-4.19 VM provisioning failed: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.19/1747/ |
pull skb data at the entrance of from-containter #19308 (@liuyuan10)
Once this PR is merged, you can update the PR labels via: