
[WIP][PoC] cacher: Attempt serving paginated LIST calls from watchCache #108392

Closed

Conversation

MadhavJivrajani
Contributor

What type of PR is this?

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?


Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


/sig api-machinery scalability
/assign @wojtek-t
/cc @nikhita
Related to #108003

@k8s-ci-robot
Contributor

@MadhavJivrajani: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected. Please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the do-not-merge/work-in-progress, sig/api-machinery, size/L, and sig/scalability labels on Feb 28, 2022
@k8s-ci-robot requested a review from nikhita on February 28, 2022 16:47
@k8s-ci-robot added the do-not-merge/release-note-label-needed, do-not-merge/needs-sig, and cncf-cla: yes labels on Feb 28, 2022

@k8s-ci-robot added the do-not-merge/needs-kind, needs-triage, and needs-priority labels and removed the do-not-merge/needs-sig label on Feb 28, 2022
@MadhavJivrajani marked this pull request as a draft on February 28, 2022 16:47
@MadhavJivrajani
Contributor Author

/skip

@k8s-ci-robot added the area/apiserver and area/dependency labels on Feb 28, 2022
@MadhavJivrajani
Contributor Author

MadhavJivrajani commented Feb 28, 2022

@wojtek-t for now only the btree-backed cache of storeElements is implemented; we'll work on making the store in watchCache btree-backed next (without implementing indexes for the initial PoC, as discussed), followed by the actual pagination logic.
I opened a draft PR mainly to try and get early feedback on the current btree approach.

@MadhavJivrajani MadhavJivrajani changed the title [WIP][PoC] cacher: Implement a btree backed cache of storeElements [WIP][PoC] cacher: Attempt serving paginated LIST calls from watchCache Feb 28, 2022
@wojtek-t
Member

wojtek-t commented Mar 1, 2022

/retest

@fedebongio
Contributor

/triage accepted

@k8s-ci-robot added the triage/accepted label and removed the needs-triage label on Mar 3, 2022
return nil
}

// TODO(MadhavJivrajani): This is un-implemented for now. Stubbing this out
Member

In general you would need to reimplement something similar to https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/cache/thread_safe_store.go#L135-L323

However - our case is somewhat special, as there is exactly one index defined and only for pods.
So we can probably shortcut it and:

  • implement just a single index using a btree [it will be a btree with key="index value" and value="set of keys matching that value"]
  • use that here (we can have a strict validation that will fail creation if there is more than 1 indexer)
    [we can eventually generalize it, but it's not needed now]
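For reference, a minimal sketch of what such a single btree-backed index could look like (add and lookup only), assuming github.com/google/btree v1 and k8s.io/apimachinery's sets package; indexBucket, singleIndex, and their fields are illustrative names, not from the PR:

package cacher

import (
	"sync"

	"github.com/google/btree"
	"k8s.io/apimachinery/pkg/util/sets"
)

// indexBucket is a hypothetical btree item: the key is the index value,
// the payload is the set of store keys matching that value.
type indexBucket struct {
	indexValue string
	keys       sets.String
}

func (b *indexBucket) Less(than btree.Item) bool {
	return b.indexValue < than.(*indexBucket).indexValue
}

// singleIndex maintains exactly one index (e.g. the pod nodeName index)
// as a btree of indexBucket items.
type singleIndex struct {
	lock sync.RWMutex
	tree *btree.BTree
}

func newSingleIndex() *singleIndex {
	return &singleIndex{tree: btree.New(32)}
}

// add records that storeKey matches the given index value.
func (s *singleIndex) add(indexValue, storeKey string) {
	s.lock.Lock()
	defer s.lock.Unlock()
	if item := s.tree.Get(&indexBucket{indexValue: indexValue}); item != nil {
		item.(*indexBucket).keys.Insert(storeKey)
		return
	}
	s.tree.ReplaceOrInsert(&indexBucket{indexValue: indexValue, keys: sets.NewString(storeKey)})
}

// byIndex returns the store keys matching the given index value.
func (s *singleIndex) byIndex(indexValue string) []string {
	s.lock.RLock()
	defer s.lock.RUnlock()
	if item := s.tree.Get(&indexBucket{indexValue: indexValue}); item != nil {
		return item.(*indexBucket).keys.List()
	}
	return nil
}

Keeping the buckets in a btree rather than a plain map is presumably what would let the index be snapshotted via the same copy-on-write Clone as the store itself.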

Member

I actually thought about that and in the KEP I proposed to ignore indexes in general:
kubernetes/enhancements#3274

PTAL

Contributor Author

Ack
Thanks @wojtek-t! Will get back to this post release :)

Member

As I described here:
https://github.com/kubernetes/enhancements/pull/3274/files#diff-3d93eb20b3400b8a937e7ab25e0aca359f73d8600271605df260b3b83bad06a7R301-R303

let's make that very simple:
(1) if we're at "now" - let's have the index implemented the same way as we have now:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/cache/thread_safe_store.go#L70
[so basically something like:

store.lock()
current := store.btree.Clone()
indexed := store.index[myvalue] (probably deep-copied, depending on the implementation)
store.unlock()

// process the request using "current" and "indexed"
]

(2) let's ignore that for continuation
[the index is just an optimization, and for continuation, we can just go ahead and process all items]

@MadhavJivrajani
Contributor Author

@wojtek-t 👋🏼

I am having some difficulty with the implementation:

  • Do we replicate (more or less) the limit/continue implementation that exists here:
    func (s *store) GetList(ctx context.Context, key string, opts storage.ListOptions, listObj runtime.Object) error {
  • After trying to implement it, I ended up with this function to do a read from the btree while honouring the limit:
func (t *btreeStore) LimitPrefixRead(limit int64, key string) []interface{} {
	t.lock.RLock()
	defer t.lock.RUnlock()

	var result []interface{}
	var elementsRetrieved int64
	t.tree.AscendGreaterOrEqual(&storeElement{Key: key}, func(i btree.Item) bool {
		elementKey := i.(*storeElement).Key
		// Stop once we've collected `limit` items.
		if elementsRetrieved == limit {
			return false
		}
		// Stop once we've walked past the keys sharing the requested prefix.
		if !strings.HasPrefix(elementKey, key) {
			return false
		}
		elementsRetrieved++
		result = append(result, i)
		return true
	})

	return result
}

and I call it as

treeClone := cloneOrGetFromContinuationCache()
treeClone.LimitPrefixRead(listOpts.Predicate.Limit, somehowDecodedKeyFromContinueToken)

Done so far:

  • Implemented cache.Store using btree
    • More specifically:
type btreeIndexer interface {
	cache.Store
	Clone() btreeIndexer
	LimitPrefixRead(limit int64, key string) []interface{}
}
  • Removed indexer-dependent logic
  • Cache for caching btree roots for serving continuation requests
  • Change condition for delegating list to storage
    • Update condition in list work estimation (TODO)

Will push the progress once things are more stitched together; the implementation as of now is somewhat disparate.
I am a little fuzzy on the above-mentioned aspects; it'd be great if you could provide a few pointers. Thanks!
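As an aside on the "Cache for caching btree roots" item: the snapshotting itself can lean on the copy-on-write Clone offered by github.com/google/btree v1. A minimal illustrative sketch (the concrete btreeStore fields are assumptions, and in the interface above Clone would return a btreeIndexer rather than the concrete type):

package cacher

import (
	"sync"

	"github.com/google/btree"
)

// btreeStore is a hypothetical store of *storeElement items ordered by Key.
type btreeStore struct {
	lock sync.RWMutex
	tree *btree.BTree
}

// Clone snapshots the store. btree.Clone is lazy copy-on-write, so taking
// the snapshot is cheap and the clone can then be read without holding the
// original store's lock.
func (t *btreeStore) Clone() *btreeStore {
	t.lock.RLock()
	defer t.lock.RUnlock()
	return &btreeStore{tree: t.tree.Clone()}
}

Because the clone is copy-on-write, caching one clone per served resource version should stay cheap until either tree starts being mutated.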

@wojtek-t
Member

wojtek-t commented May 2, 2022

Do we replicate (more or less) the limit/continue implementation that exists here

More or less, but ours should be much simpler:
(a) the logic there needs to handle multiple requests to etcd - we don't need that - we just iterate over the tree until we get enough items (or process the whole tree)
(b) we shouldn't handle the ResourceVersionExactMatch logic - let's just delegate them to etcd directly
[maybe more]

After trying to implement it, I ended up with this function to do a read from the btree while honouring the limit:

Yeah - that sounds roughly like what I expected.

Do we need to handle continue tokens timing out?

I thought I described it here, but apparently not.
The way I think about it is the following:

  • decode continue (which inside has an RV)
  • if we have RV in memory - handle that request
  • if we don't have it anymore, let's delegate down to etcd
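A tiny sketch of that decision flow, under the assumption that the decoded continue token yields a start key plus the resource version it was issued at; snapshotReader and the map of cached clones are illustrative stand-ins, not APIs from the PR:

package cacher

// snapshotReader is an illustrative stand-in for the btree-backed snapshot
// (cf. the btreeIndexer interface earlier in the thread).
type snapshotReader interface {
	LimitPrefixRead(limit int64, key string) []interface{}
}

// listWithContinue sketches the decision above: serve from a cached btree
// snapshot if we still hold one at the token's resource version, otherwise
// report that the request has to be delegated down to etcd.
func listWithContinue(
	snapshots map[uint64]snapshotReader, // cached clones keyed by RV
	rv uint64, startKey string, limit int64,
) (result []interface{}, servedFromCache bool) {
	snapshot, ok := snapshots[rv]
	if !ok {
		// The snapshot at this RV is no longer in memory; fall back to etcd.
		return nil, false
	}
	return snapshot.LimitPrefixRead(limit, startKey), true
}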

Would we need to use Versioner here?

I don't fully understand the question - can you clarify?

@aramase
Member

aramase commented Sep 26, 2022

Please feel free to tag /sig auth for any additional reviews.

/remove-sig auth

@k8s-ci-robot removed the sig/auth label on Sep 26, 2022
@k8s-ci-robot added the needs-rebase label on Oct 14, 2022
@k8s-ci-robot
Contributor

@MadhavJivrajani: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

	storeClone = w.continueCache.cache[resourceVersion]
} else {
	storeClone = w.store.Clone()
	w.continueCache.cache[resourceVersion] = storeClone
Contributor

It needs a lock to avoid concurrent map read/write, which crashes the kube-apiserver.
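One way this could be addressed, sketched under the assumption that continueCache is a small struct wrapping the map; the names mirror the snippet above but are otherwise illustrative, and cloner stands in for the btreeIndexer interface from earlier in the thread:

package cacher

import "sync"

// cloner is a minimal stand-in for btreeIndexer; only Clone matters here.
type cloner interface {
	Clone() cloner
}

// continueCache caches cloned btree roots keyed by resource version so that
// continuation requests are served from a consistent snapshot. The RWMutex
// guards the map against concurrent read/write, which would otherwise crash
// the kube-apiserver.
type continueCache struct {
	lock  sync.RWMutex
	cache map[uint64]cloner
}

func newContinueCache() *continueCache {
	return &continueCache{cache: map[uint64]cloner{}}
}

// snapshotFor returns the cached clone for rv, creating and caching one from
// store if it is not present yet.
func (c *continueCache) snapshotFor(rv uint64, store cloner) cloner {
	c.lock.RLock()
	clone, ok := c.cache[rv]
	c.lock.RUnlock()
	if ok {
		return clone
	}

	c.lock.Lock()
	defer c.lock.Unlock()
	// Re-check under the write lock in case another request raced us here.
	if cached, ok := c.cache[rv]; ok {
		return cached
	}
	clone = store.Clone()
	c.cache[rv] = clone
	return clone
}

A sync.Map or an LRU with its own locking would work just as well; the important part is that no path reads or writes continueCache.cache without synchronization.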

@lavalamp
Member

I am thinking: rather than fix the watch cache like this, what if we focused on making clients use watch with a catch-up event, as in kubernetes/enhancements#3667?

@dims
Member

dims commented Dec 12, 2022

If you still need this PR, please rebase; if not, please close it.

@dims
Member

dims commented Dec 12, 2022

This PR has the work-in-progress label; please revisit to see if you still need it, and close it if not.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Mar 12, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 11, 2023
@MadhavJivrajani
Contributor Author

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label on Apr 12, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jul 11, 2023
@dims
Member

dims commented Oct 24, 2023

This work-in-progress PR needs a rebase. Please rebase if this PR is still needed for the upcoming release.

@dims added the lifecycle/rotten label on Jan 4, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:


/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
