
Cache cluster topologies #374

Open
msau42 opened this issue Oct 26, 2019 · 6 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@msau42
Collaborator

msau42 commented Oct 26, 2019

Once we move CSINode and Node to informers, we could potentially cache the topologies available in the cluster instead of having to List() and iterate through them every time we provision.

/kind feature
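
A minimal sketch of what such an informer-backed cache could look like (all names here are hypothetical, not the provisioner's actual code; for simplicity it tracks only the zone label, whereas the real keys would come from CSINode):

```go
// Hypothetical sketch: maintain a reference-counted set of topology values
// from Node informer events, so provisioning reads an already-aggregated
// set instead of walking every Node object on each provision call.
package main

import (
	"sync"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

const zoneKey = "topology.kubernetes.io/zone"

type topologyCache struct {
	mu     sync.RWMutex
	counts map[string]int // zone value -> number of nodes exposing it
}

func (c *topologyCache) add(n *v1.Node) {
	if v, ok := n.Labels[zoneKey]; ok {
		c.mu.Lock()
		c.counts[v]++
		c.mu.Unlock()
	}
}

func (c *topologyCache) remove(n *v1.Node) {
	if v, ok := n.Labels[zoneKey]; ok {
		c.mu.Lock()
		if c.counts[v]--; c.counts[v] <= 0 {
			delete(c.counts, v)
		}
		c.mu.Unlock()
	}
}

// Topologies is what provisioning would read instead of doing
// List() + iterate over every Node.
func (c *topologyCache) Topologies() []string {
	c.mu.RLock()
	defer c.mu.RUnlock()
	out := make([]string, 0, len(c.counts))
	for v := range c.counts {
		out = append(out, v)
	}
	return out
}

func newTopologyCache(cs kubernetes.Interface, stop <-chan struct{}) *topologyCache {
	tc := &topologyCache{counts: map[string]int{}}
	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	factory.Core().V1().Nodes().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) { tc.add(obj.(*v1.Node)) },
		UpdateFunc: func(oldObj, newObj interface{}) {
			tc.remove(oldObj.(*v1.Node)) // labels may have changed
			tc.add(newObj.(*v1.Node))
		},
		DeleteFunc: func(obj interface{}) {
			// A real version would also unwrap cache.DeletedFinalStateUnknown.
			if n, ok := obj.(*v1.Node); ok {
				tc.remove(n)
			}
		},
	})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	return tc
}
```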

k8s-ci-robot added the kind/feature label Oct 26, 2019
@mucahitkurt
Contributor

@msau42 I think this refers to these lines of code?

If I understand correctly, we would like to cache the topology key/value pairs for a cluster. Since they are all node labels, we could cache all node labels and then select according to the requested topology keys.

This would make it unnecessary to extract the requested topology keys from the nodes that have at least one of them; we might be able to select the values of the requested topology keys from the informer with a single List() call.

But I'm not sure about the benefit vs. complexity tradeoff, since we already use the informer for Nodes.
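
For illustration, a sketch of that single-List() selection against the informer's local cache (no API round trip; the nodeLister and keys parameters are assumptions, not the provisioner's actual code):

```go
package main

import (
	"strings"

	"k8s.io/apimachinery/pkg/labels"
	listersv1 "k8s.io/client-go/listers/core/v1"
)

// distinctSegments returns the distinct combinations of the requested
// topology keys across all nodes, using one List() against the informer
// cache instead of extracting keys node by node.
func distinctSegments(nodeLister listersv1.NodeLister, keys []string) (map[string]struct{}, error) {
	nodes, err := nodeLister.List(labels.Everything())
	if err != nil {
		return nil, err
	}
	seen := map[string]struct{}{}
	for _, node := range nodes {
		parts := make([]string, 0, len(keys))
		for _, k := range keys {
			v, ok := node.Labels[k]
			if !ok {
				break // node doesn't report this topology key
			}
			parts = append(parts, k+"="+v)
		}
		if len(parts) == len(keys) { // node has all requested keys
			seen[strings.Join(parts, ",")] = struct{}{}
		}
	}
	return seen, nil
}
```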

@msau42
Collaborator Author

msau42 commented Nov 26, 2019

Yes, basically any time we have to list, like here as well.

The performance benefit should theoretically show up when: 1) you have a lot of nodes, and 2) you have very few topologies compared to nodes (like zones). But agreed, it may be good to write a performance benchmark test first, so we can compare having to List() nodes every time you provision vs. listing topologies from a cache.
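
A sketch of the benchmark shape suggested above (lives in a _test.go file; makeFakeNodes and the node/zone counts are hypothetical, chosen to exaggerate the many-nodes/few-topologies case):

```go
package provisioner_test

import (
	"fmt"
	"testing"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const zoneKey = "topology.kubernetes.io/zone"

// makeFakeNodes spreads n nodes across z zones.
func makeFakeNodes(n, z int) []*v1.Node {
	nodes := make([]*v1.Node, n)
	for i := range nodes {
		nodes[i] = &v1.Node{ObjectMeta: metav1.ObjectMeta{
			Name:   fmt.Sprintf("node-%d", i),
			Labels: map[string]string{zoneKey: fmt.Sprintf("zone-%d", i%z)},
		}}
	}
	return nodes
}

// Baseline: walk every node on every provision call, as done today.
func BenchmarkListAndIterate(b *testing.B) {
	nodes := makeFakeNodes(5000, 3)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		seen := map[string]struct{}{}
		for _, n := range nodes {
			seen[n.Labels[zoneKey]] = struct{}{}
		}
	}
}

// Candidate: read a set kept up to date by informer event handlers,
// which is O(#topologies) per provision instead of O(#nodes).
func BenchmarkCachedTopologies(b *testing.B) {
	cached := map[string]struct{}{"zone-0": {}, "zone-1": {}, "zone-2": {}}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for topo := range cached {
			_ = topo
		}
	}
}
```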

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Feb 25, 2020
@mucahitkurt
Contributor

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label Feb 25, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label May 25, 2020
@msau42
Collaborator Author

msau42 commented Jun 3, 2020

/lifecycle frozen

k8s-ci-robot added the lifecycle/frozen label and removed the lifecycle/stale label Jun 3, 2020
msau42 mentioned this issue Aug 14, 2020