v1.11 backports 2023-04-12 #24823
Conversation
[ upstream commit 4aa6911 ] If we can't read "procfs", the user will not know the reason for it. We should log the error as well. Signed-off-by: André Martins <andre@cilium.io> Signed-off-by: Paul Chaignon <paul@cilium.io>
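For illustration, a minimal Go sketch of the pattern this fix applies (names are hypothetical stand-ins, not the actual agent code):

```go
package procfs

import (
	"os"

	"github.com/sirupsen/logrus"
)

var log = logrus.New()

// readProcFile is a hypothetical stand-in for the agent's procfs access.
// The point of the fix: surface the underlying error instead of silently
// swallowing it, so the user can see why procfs could not be read.
func readProcFile(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		log.WithError(err).Errorf("Unable to read %s", path)
		return nil, err
	}
	return data, nil
}
```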
[ upstream commit 2f9850c ] The upgradeCompatibility value should always be set to the first version that the user installed, in order to assume the Helm defaults that were in place during that release. Tracking each version here initially would provide confirmation for users in order to pick a valid version, except that we forgot to keep it up to date with each release. Drop the examples to reduce user confusion. Signed-off-by: Joe Stringer <joe@cilium.io> Signed-off-by: Paul Chaignon <paul@cilium.io>
[ upstream commit e773f7e ] Following up on cilium#23334, add more exceptions for errors that appear to be caused by etcd rather than Cilium. Fixes: cilium#24701 Suggested-by: André Martins <andre@cilium.io> Signed-off-by: Gilberto Bertin <jibi@cilium.io> Signed-off-by: Paul Chaignon <paul@cilium.io>
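The exception mechanism amounts to a substring allowlist over captured log lines; a hedged sketch (the exception strings below are illustrative, not the ones added by this commit):

```go
package cicheck

import "strings"

// etcdExceptions lists log substrings treated as etcd-side failures
// rather than Cilium errors. The entries here are illustrative only.
var etcdExceptions = []string{
	"etcdserver: request timed out",
	"lost connection to etcd",
}

// isKnownEtcdError reports whether a captured log line matches one of
// the known etcd-related exceptions and should not fail the test.
func isKnownEtcdError(logLine string) bool {
	for _, exception := range etcdExceptions {
		if strings.Contains(logLine, exception) {
			return true
		}
	}
	return false
}
```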
For the backport of the commit I authored 👍
Suggested changes to resolve the linter errors. If it's tedious for you, I can push to your branch. Thanks!
[ upstream commit 89a1936 ] The restore code attempts to reconcile datapath state with the userspace state post agent restart. Bailing out early on failures prevents any remediation from happening, so log any errors. Follow-up commits will try to handle leaked backends in the cluster if any. Signed-off-by: Aditi Ghag <aditi@cilium.io> Signed-off-by: Paul Chaignon <paul@cilium.io>
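In spirit, the change replaces an early return with per-entry logging; a hypothetical sketch (types and helpers are stand-ins, not the agent's actual code):

```go
package restore

import "log"

// Backend is a simplified stand-in for the agent's backend type.
type Backend struct{ ID uint16 }

// restoreBackend models re-adding one entry to the datapath; hypothetical.
func restoreBackend(b Backend) error { return nil }

// restoreBackends logs per-entry failures and keeps going instead of
// bailing out early, leaving room for follow-up reconciliation of any
// leaked backends.
func restoreBackends(backends []Backend) {
	for _, b := range backends {
		if err := restoreBackend(b); err != nil {
			log.Printf("failed to restore backend %d, continuing: %v", b.ID, err)
		}
	}
}
```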
[ upstream commit ebe2b55 ] The restore logic attempts to reconcile datapath state with the userspace state post agent restart. Previously, it first restored backends from the `lb4_backends` map before restoring service entries from the `lb4_services` map. If there were error scenarios prior to agent restart (for example, a full backend map because of leaked backends), the logic would fail to restore backends currently referenced in the services map (and, as a result, selected for load-balancing traffic). This commit prioritizes restoring service entries, followed by backend entries. A follow-up commit handles error cases such as leaked backends by keeping track of backends retrieved while restoring service entries, and then using that state to subsequently restore backends. Signed-off-by: Aditi Ghag <aditi@cilium.io> Signed-off-by: Paul Chaignon <paul@cilium.io>
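A hedged sketch of the reordering (function and type names are stand-ins for the agent's lb4 restore logic):

```go
package svcrestore

// restoreState restores service entries before backend entries, so the
// backend IDs referenced by restored services are known by the time
// backends are processed.
func restoreState() {
	// 1. lb4_services first: collect the backend IDs still in use.
	referencedIDs := restoreServices()

	// 2. lb4_backends second: restore entries guided by the IDs above,
	// so backends selected for load-balancing are not skipped.
	restoreBackends(referencedIDs)
}

// Stand-ins for the agent's real restore logic.
func restoreServices() map[uint16]struct{}    { return map[uint16]struct{}{} }
func restoreBackends(ids map[uint16]struct{}) {}
```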
Force-pushed from 27ebad1 to 5994b9b
[ upstream commit 5311f81 ] In certain error scenarios, backends can be leaked: they were deleted from the userspace state but left in the datapath backends map. To reconcile datapath and userspace state, identify backends that were created with different IDs but the same L3n4Addr hash. This commit builds on previous commits that don't bail out on such error conditions (e.g., backend ID mismatch during restore), and tracks backends that are currently referenced in service entries restored from the lb4_services map in order to restore backend entries. Furthermore, it uses the tracked state to delete any duplicate backends that were previously leaked. Fixes: b79a4a5 (pkg/service: Gracefully terminate service backends) Signed-off-by: Aditi Ghag <aditi@cilium.io> Signed-off-by: Paul Chaignon <paul@cilium.io>
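A minimal sketch of the duplicate detection described above (the string key stands in for the L3n4Addr hash; types are hypothetical):

```go
package service

// backend pairs a datapath backend ID with its L3n4Addr hash (stand-in).
type backend struct {
	id   uint16
	hash string
}

// findLeaked returns datapath backends whose address hash matches a
// restored backend but whose ID differs: duplicates leaked by earlier
// error scenarios, to be deleted from the lb4_backends map.
func findLeaked(restored map[string]uint16, datapath []backend) []backend {
	var leaked []backend
	for _, b := range datapath {
		if id, ok := restored[b.hash]; ok && id != b.id {
			leaked = append(leaked, b)
		}
	}
	return leaked
}
```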
[ upstream commit 92407a8 ] Today, we always compile .asm files for endpoints, even though we rarely use them. They take a lot of space in the sysdumps and increase the overall compile time. This commit changes the loader to only compile those files if debugging mode is enabled. Reported-by: Sebastian Wicki <sebastian@isovalent.com> Signed-off-by: Paul Chaignon <paul@cilium.io>
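The change amounts to gating one compiler invocation behind the debug option; a hypothetical sketch (flag and helper names are illustrative):

```go
package loader

// debugMode stands in for the agent's debug option.
var debugMode bool

// compileEndpoint always builds the object file the datapath loads, but
// only emits the .asm listing when debugging is enabled.
func compileEndpoint(name string) {
	compileObject(name)
	if debugMode {
		compileAsm(name) // large and rarely used: skip unless debugging
	}
}

func compileObject(name string) {}
func compileAsm(name string)    {}
```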
[ upstream commit e802c29 ] These wildcard variables will be used by a later commit in the IPsec logic. Signed-off-by: Paul Chaignon <paul@cilium.io>
[ upstream commit ddd491b ] UpsertIPsecEndpoint is currently unable to replace stale XFRM states. We use XfrmStateAdd, which fails with EEXIST if a state with the same key (IPs, SPI, and mark) already exists. We can't use XfrmStateUpdate because it fails with ESRCH if no state with the specified key exists. Note that we don't have the same issue for XFRM policies, because XfrmPolicyUpdate doesn't return ESRCH if no such policy already exists. No idea why the two APIs are not consistent. We therefore need to implement proper 'update or insert' logic for XFRM states ourselves. To that end, we first check if the state we want to add already exists. If it doesn't, we attempt to add it. If that fails with EEXIST, we know that some other state is conflicting. In that case, we attempt to remove any conflicting XFRM states that are found and then attempt to add the new state again. To find conflicting XFRM states, we use the same logic as the kernel does (cf. __xfrm_state_lookup). Signed-off-by: Paul Chaignon <paul@cilium.io>
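A condensed sketch of that upsert using the vishvananda/netlink package (the conflict check below matches only destination IP, SPI, and protocol; the actual logic also considers marks, mirroring __xfrm_state_lookup):

```go
package ipsec

import (
	"errors"

	"github.com/vishvananda/netlink"
	"golang.org/x/sys/unix"
)

// upsertXfrmState adds an XFRM state; on EEXIST it deletes conflicting
// states (matched on a simplified version of the kernel's lookup key)
// and retries the add once.
func upsertXfrmState(desired *netlink.XfrmState) error {
	err := netlink.XfrmStateAdd(desired)
	if !errors.Is(err, unix.EEXIST) {
		return err // nil on success, or an unrelated failure
	}
	states, err := netlink.XfrmStateList(netlink.FAMILY_ALL)
	if err != nil {
		return err
	}
	for i := range states {
		s := &states[i]
		// Simplified conflict check: destination IP, SPI, and protocol.
		if s.Dst.Equal(desired.Dst) && s.Spi == desired.Spi && s.Proto == desired.Proto {
			if err := netlink.XfrmStateDel(s); err != nil {
				return err
			}
		}
	}
	return netlink.XfrmStateAdd(desired)
}
```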
[ upstream commit 7d44f37 ] This commit adds a catch-all XFRM policy for outgoing traffic that has the encryption bit set. The goal is to catch any traffic that might otherwise pass through unencrypted while we are replacing XFRM policies & states. Those operations cannot always be performed atomically, so we may have brief moments where there is no XFRM policy to encrypt a subset of traffic. This policy ensures we drop such traffic instead of letting it flow out in plain text. We do need to match on the mark because there is also traffic flowing through XFRM that we don't want to encrypt (e.g., hostns traffic). Signed-off-by: Paul Chaignon <paul@cilium.io>
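A hedged sketch of such a catch-all policy with vishvananda/netlink (the mark value, mask, and priority are illustrative, not Cilium's actual constants):

```go
package ipsec

import (
	"net"

	"github.com/vishvananda/netlink"
)

// installCatchAllDropPolicy blocks any outgoing packet carrying the
// encryption mark that no more specific XFRM policy matched, so traffic
// is dropped rather than sent in plain text during policy replacement.
func installCatchAllDropPolicy() error {
	_, wildcard, _ := net.ParseCIDR("0.0.0.0/0")
	policy := &netlink.XfrmPolicy{
		Src:      wildcard,
		Dst:      wildcard,
		Dir:      netlink.XFRM_DIR_OUT,
		Action:   netlink.XFRM_POLICY_BLOCK,
		Priority: 100, // higher number = lower precedence, so real encrypt policies win
		Mark: &netlink.XfrmMark{
			Value: 0x0e00, // illustrative encryption mark
			Mask:  0xff00, // only marked (to-be-encrypted) traffic is caught
		},
	}
	return netlink.XfrmPolicyUpdate(policy)
}
```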
[ upstream commit 688dc9a ] We recently changed our XFRM states and policies (IPs and marks). We however failed to remove the stale XFRM states and policies, and it turns out that they conflict (e.g., the kernel ends up picking the stale policies for encryption instead of the new ones). This commit therefore cleans up those stale XFRM states and policies. We can identify them based on mark values and masks (we switched from 0xFF00 to 0xFFFFFF00). The new XFRM states and policies are added as we receive the information on remote nodes. By removing the stale states and policies before the new ones are installed for all nodes, we could cause plain-text traffic on egress and packet drops on ingress. To ensure we never let plain-text traffic out, we will clean up the stale config only once the catch-all default-drop policy is installed. That way, if there is a brief moment where, for a connection nodeA -> nodeB, we don't have a policy, traffic will be dropped instead of sent in plain text. For each connection nodeA -> nodeB, those packet drops on egress and ingress of nodeA will happen between the time we replace the BPF datapath and the time we've installed the new XFRM state and policy corresponding to nodeB. Waiting longer to remove the stale states and policies doesn't affect the drops, as they will keep happening until the new states and policies are installed. This all happens on agent startup, as soon as we have the necessary information from k8s. Signed-off-by: Paul Chaignon <paul@cilium.io>
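A sketch of the cleanup, keyed on the old mark mask (API per vishvananda/netlink; the mask constants come from the commit message, the rest is a hypothetical simplification):

```go
package ipsec

import "github.com/vishvananda/netlink"

const (
	oldMarkMask = 0xFF00     // mask used by the stale states/policies
	newMarkMask = 0xFFFFFF00 // mask used by the new ones
)

// removeStaleXfrmStates deletes XFRM states still carrying the old mark
// mask. Called only after the catch-all default-drop policy is in place,
// so a momentarily uncovered connection drops packets instead of leaking
// plain-text traffic.
func removeStaleXfrmStates() error {
	states, err := netlink.XfrmStateList(netlink.FAMILY_ALL)
	if err != nil {
		return err
	}
	for i := range states {
		s := &states[i]
		if s.Mark != nil && s.Mark.Mask == oldMarkMask {
			if err := netlink.XfrmStateDel(s); err != nil {
				return err
			}
		}
	}
	return nil
}
```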
Force-pushed from 5994b9b to 05b6f2d
Thanks a lot!
I've run additional checks against my two PRs to confirm they are still implementing the expected behavior. All looks good.
Two of the 4.9 Jenkins pipelines failed with the following error:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.17-kernel-4.9/1284/testReport/junit/Suite-k8s-1/17/K8sPolicyTest_Basic_Test_checks_all_kind_of_Kubernetes_policies/
This is also an issue on the other v1.11 backport PR: #24852 (comment)
Edit: This seems to be an occurrence of #24394
/ci-aks-1.11
/test-1.16-netnext
This means that all three red tests are unrelated failures. Merging this PR.
loader: Don't compile .asm files by default #24769 (@pchaigno)
Once this PR is merged, you can update the PR labels via: