🌱 controller/machine: use unstructured caching client #8896

Merged

Conversation


@sbueringer sbueringer commented Jun 21, 2023

Signed-off-by: Stefan Büringer buringerst@vmware.com

What this PR does / why we need it:

tl;dr

I think we can cache all our Get calls with unstructured in the Machine controller. This leads to huge performance improvements at scale (an average Machine reconcile goes from ~1 second down to double-digit milliseconds).

Most of this PR is just updating the tests.
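For illustration only, a minimal sketch of how unstructured caching can be opted into with controller-runtime's client options (assuming controller-runtime v0.15+; the actual wiring in this PR goes through the existing client setup in main.go, so treat this as a sketch of the mechanism rather than the PR's code):

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func main() {
	// With Unstructured: true, Get/List calls made with *unstructured.Unstructured
	// objects are served from the manager's cache instead of going to the API server.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Client: client.Options{
			Cache: &client.CacheOptions{
				Unstructured: true,
			},
		},
	})
	if err != nil {
		panic(err)
	}

	// Controllers registered against mgr then read BootstrapConfig / InfraMachine
	// objects through the cache via mgr.GetClient().
	_ = mgr
}
```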

Why do I think it's safe to cache the unstructured gets?

While a Machine reconcile could see stale BootstrapConfigs / InfraMachines, based on experiments and upstream documentation, every update to a BootstrapConfig / InfraMachine leads to another Machine reconcile, and by the time that reconcile runs the BootstrapConfig / InfraMachine has already been updated in the cache.

This is the case because Kubernetes informers always update the cache first, before they notify event handlers. (Event handlers eventually enqueue a reconcile request for our Machines.)

[Diagram: client-go / controller-runtime controller interaction]
(Source: https://github.com/kubernetes/sample-controller/blob/master/docs/controller-client-go.md)

The diagram shows that once an event is received in 1), the informer always updates the cache in 5) before triggering the event handler in 6). A controller-runtime controller reconciles roughly after 8).

Please note that there is no way to guarantee that a Machine reconcile always uses a 100% up-to-date BootstrapConfig / InfraMachine: even with a live client, the reconciler cannot see writes to those objects that happen during the Machine reconcile.
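As a sketch of the resulting pattern (the real controller reads these objects through its external helpers; getInfraMachine and the field paths below are hypothetical), the reconciler reads the referenced object as unstructured through the regular, now caching, client; a stale result is tolerable because the watch on the referenced kind guarantees another reconcile once the cache has caught up:

```go
package machine

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getInfraMachine is a hypothetical helper showing the pattern: read the InfraMachine
// referenced by the Machine as unstructured. With unstructured caching enabled this Get
// is served from the informer cache instead of the API server.
func getInfraMachine(ctx context.Context, c client.Client, m *clusterv1.Machine) (*unstructured.Unstructured, error) {
	obj := &unstructured.Unstructured{}
	obj.SetAPIVersion(m.Spec.InfrastructureRef.APIVersion)
	obj.SetKind(m.Spec.InfrastructureRef.Kind)

	key := client.ObjectKey{Namespace: m.Namespace, Name: m.Spec.InfrastructureRef.Name}
	if err := c.Get(ctx, key, obj); err != nil {
		return nil, err
	}
	return obj, nil
}
```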

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Related #8814

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jun 21, 2023
@sbueringer sbueringer mentioned this pull request Jun 21, 2023

@killianmuldoon killianmuldoon left a comment


Q: does this apply to essentially all of our live client reads? Or are there cases where the client-go cache doesn't follow this behaviour?

I'm near certain there have been bugs in the past that were solved by moving to the live client, but I don't understand what the difference with these calls is.

@sbueringer

sbueringer commented Jun 21, 2023

Q: does this apply to essentially all of our live client reads? Or are there cases where the client-go cache doesn't follow this behaviour?

I'm near certain there's been bugs in the past that have been solved by moving to the live client, but I don't understand what the difference with these calls is.

I think the difference is whether we can tolerate a stale read or not. There are cases where we can't, e.g. we don't want to create duplicate MachineDeployments because we didn't see the one we created before, basically because we don't want to implement logic that tries to roll back what went wrong before.

I think in the Machine case it's fine to just continue reconciling until we eventually get the "final" BootstrapConfig / InfraMachine (btw, I ran tests and even with hundreds of clusters and thousands of Machines I barely ran into cases where the read from the cache was stale, just cases where the BootstrapConfig / InfraMachine was updated concurrently during the Machine reconcile).

It's also crucial that the Machine controller watches BootstrapConfig / InfraMachine. This guarantees that we get another reconcile with an up-to-date BootstrapConfig / InfraMachine. That is not always the case for other controllers and objects.
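For illustration, a minimal sketch of how such a watch can be registered (assuming controller-runtime v0.15 APIs; in the actual controller this is handled by the external object tracker, and watchExternal is a hypothetical helper):

```go
package machine

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

// watchExternal adds a watch for the given external GVK (e.g. a BootstrapConfig or
// InfraMachine kind) so that every create/update/delete of such an object enqueues a
// reconcile request for the owning Machine. This is what guarantees a follow-up
// reconcile once the cache holds the updated object.
func watchExternal(mgr ctrl.Manager, c controller.Controller, gvk schema.GroupVersionKind) error {
	u := &unstructured.Unstructured{}
	u.SetGroupVersionKind(gvk)

	return c.Watch(
		source.Kind(mgr.GetCache(), u),
		handler.EnqueueRequestForOwner(mgr.GetScheme(), mgr.GetRESTMapper(), &clusterv1.Machine{}),
	)
}
```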

So I think it comes down to a case-by-case decision.

(Happy to look at specific previous bugs. I had the same feeling that you have, but based on debugging through client-go, running experiments with logs, and reading the documentation, I'm pretty sure that this is just how informers behave.)

To give another example: I think we had cases where e.g. the KCP controller was reading a stale KCP object. But the question is how stale that object really was, or whether a new update to the KCP object came in after the reconcile started (which of course also led to a subsequent reconcile).

Some more context about the Machine controller specifically: even after the BootstrapConfig / InfraMachine stabilize, we still get a few additional reconciles on the Machine object (probably because of updates on the Node). Additionally, we also have the 10m resyncPeriod.

@sbueringer

sbueringer commented Jun 21, 2023

I'll take a look at the e2e tests.

EDIT: Oops, forgot to also add the changes to main.go :)

@richardcase

This is great @sbueringer. In our scale testing, we are seeing the read performance of Machine as one of the biggest bottlenecks. Above about 500 clusters, the read calls start exceeding the SLO of 1s, so if this change brings it down to less than 100ms that is fantastic 🎉

@sbueringer

/test pull-cluster-api-e2e-full-main

@fabriziopandini

fabriziopandini commented Jun 22, 2023

/lgtm

I think it is ok to use cached reads for external objects in the Machine controller, given that, as explained above, we are watching those objects and the Machine controller will reconcile again as soon as the BootstrapConfig or InfraMachine changes (also, the Machine controller is just waiting for provisioning to happen, so no harm is done if we wait for the next reconcile).

Looking at this from another perspective, I would say the current implementation is mostly due to the limitations of the cache (or of the cache options) at the time we first implemented this code; over time, not only has controller-runtime improved a lot (kudos to the team), but we also have a better and deeper understanding of how all this works, and we can now make changes like this in a very surgical way.

Note: eventually, in a follow-up we can optimize the memory footprint of the cache for BootstrapConfig and InfraMachine by keeping in memory only the subset of object fields that are defined by the contract and dropping everything else.
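For illustration, a minimal sketch of what such a follow-up could look like using a cache transform (assuming controller-runtime v0.15 cache options, e.g. wired in via cache.Options.DefaultTransform; contractTransform and the retained fields are purely hypothetical and would have to follow the provider contracts):

```go
package machinecache

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	toolscache "k8s.io/client-go/tools/cache"
)

// contractTransform prunes unstructured objects down to a small set of fields before
// they are stored in the informer cache, shrinking the cache's memory footprint.
func contractTransform() toolscache.TransformFunc {
	return func(i interface{}) (interface{}, error) {
		u, ok := i.(*unstructured.Unstructured)
		if !ok {
			// Leave typed objects untouched.
			return i, nil
		}

		pruned := &unstructured.Unstructured{Object: map[string]interface{}{}}
		pruned.SetGroupVersionKind(u.GroupVersionKind())
		pruned.SetNamespace(u.GetNamespace())
		pruned.SetName(u.GetName())
		pruned.SetUID(u.GetUID())
		pruned.SetResourceVersion(u.GetResourceVersion())
		pruned.SetOwnerReferences(u.GetOwnerReferences())

		// Keep a couple of fields as an example; the real list would be derived from
		// the bootstrap / infrastructure provider contracts.
		if providerID, found, err := unstructured.NestedString(u.Object, "spec", "providerID"); err == nil && found {
			_ = unstructured.SetNestedField(pruned.Object, providerID, "spec", "providerID")
		}
		if ready, found, err := unstructured.NestedBool(u.Object, "status", "ready"); err == nil && found {
			_ = unstructured.SetNestedField(pruned.Object, ready, "status", "ready")
		}
		return pruned, nil
	}
}
```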

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 22, 2023
@k8s-ci-robot

LGTM label has been added.

Git tree hash: ec426561a3c74aa88db7ea7e9a5bc8eac73365af

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 23, 2023
Signed-off-by: Stefan Büringer buringerst@vmware.com
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 23, 2023
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 23, 2023

@fabriziopandini fabriziopandini left a comment


/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 23, 2023
@k8s-ci-robot

LGTM label has been added.

Git tree hash: 8270820dbfd9feb96ee32340f354df4174d45875

@fabriziopandini

/approve

@k8s-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fabriziopandini

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 26, 2023
@k8s-ci-robot k8s-ci-robot merged commit 1d61dd9 into kubernetes-sigs:main Jun 26, 2023
19 checks passed
@k8s-ci-robot k8s-ci-robot added this to the v1.5 milestone Jun 26, 2023
@sbueringer sbueringer deleted the pr-machine-ctrl-uncached branch June 26, 2023 10:53
@killianmuldoon

/area machine

@k8s-ci-robot k8s-ci-robot added the area/machine Issues or PRs related to machine lifecycle management label Jun 27, 2023