ipam/multipool: Fix bug where allocator was unable to update CiliumNode #27963

Merged: gandro merged 2 commits into cilium:main from gandro:pr/gandro/multi-pool-fix-operator-desync on Sep 7, 2023
Conversation
The IPAM allocator must not allocate new CIDRs before restoration has finished, i.e. before the K8s CiliumNode cache has synced and we have observed all nodes.

Before this commit, we already started a controller attempting to allocate new CIDRs even if the allocator was not yet ready. This led to the controller being run unnecessarily, as it cannot succeed until `Resync` is called. Creating a controller before `Resync` is called is also not needed, because `Resync` itself (re-)creates a controller for each pending node.

Therefore, this commit changes the logic to not start any controller before `restoreFinished` is true, as the controller will be created by `Resync` once everything is ready.

Signed-off-by: Sebastian Wicki <sebastian@isovalent.com>
gandro added the following labels on Sep 6, 2023:
- release-note/bug: This PR fixes an issue in a previous release of Cilium.
- area/ipam: Impacts IP address management functionality.
- needs-backport/1.14: This PR / issue needs backporting to the v1.14 branch.
This commit fixes an issue where the multi-pool allocator was unable to update a CiliumNode resource because of concurrent writes. This manifested in the following error being emitted repeatedly:

```
level=debug msg="Controller run failed" consecutiveErrors=48 error="failed to update spec: Operation cannot be fulfilled on ciliumnodes.cilium.io \"kind-worker\": the object has been modified; please apply your changes to the latest version and try again" name=ipam-multi-pool-sync-kind-worker subsys=controller uuid=12ba9a52-d36f-48fe-a7b7-3cf97c2cdb26
```

This would happen because the operator CiliumNode watcher does not call the `Upsert` function if only the metadata (e.g. resource version, labels, annotations, etc.) of a node changes. This meant that the allocator was working with a stale `resourceVersion` of the CiliumNode object, causing any updates to fail until `Upsert` would be called again because some non-metadata field changed.

This commit fixes that issue by having the controller fetch the most recent version of the `CiliumNode` if the Kubernetes API reports that there have been concurrent changes. This matches the behavior of the cluster-pool and ENI/Azure/AlibabaCloud implementations, which already correctly fetch the resource again upon conflicts.

In addition, this commit adds a unit test covering the new behavior.

Signed-off-by: Sebastian Wicki <sebastian@isovalent.com>
gandro force-pushed the pr/gandro/multi-pool-fix-operator-desync branch from f5ffacc to 0c69aaf on September 6, 2023 12:03
/test
tklauser
approved these changes
Sep 6, 2023
Nice find. Thanks for adding a unit test as well!
tommyp1ckles
approved these changes
Sep 7, 2023
nice fix, changes look good on my end
maintainer-s-little-helper bot added the ready-to-merge label (This PR has passed all tests and received consensus from code owners to merge.) on Sep 7, 2023
gandro added the backport-pending/1.14 label (The backport for Cilium 1.14.x for this PR is in progress.) and removed the needs-backport/1.14 label on Sep 12, 2023
gandro added the backport-done/1.14 label (The backport for Cilium 1.14.x for this PR is done.) and removed the backport-pending/1.14 label on Sep 25, 2023
jrajahalme moved this from "Needs backport from main" to "Backport done to v1.14" in the 1.14.3 project on Oct 18, 2023
Labels
- area/ipam: Impacts IP address management functionality.
- backport-done/1.14: The backport for Cilium 1.14.x for this PR is done.
- ready-to-merge: This PR has passed all tests and received consensus from code owners to merge.
- release-note/bug: This PR fixes an issue in a previous release of Cilium.