Build A MVP Persistent Caching Proxy #342
Comments
2 seems quite a lot harder than 1, FWIW.
We have the freshness problem either way, so I don't think one or the other is harder. The first is reboot-safe; the second is operationally simpler.
Clearing milestone to re-triage.
Part of multi-release epic #1225 |
@p0lyn0mial FYI for tracking, this is the issue for implementing the cache server, would be cool to link PRs to it as new ones come in |
@p0lyn0mial @sttts is this going to be completed this week for v0.10? It looks like we still have several outstanding items? |
We need to finish the workspace refactoring before we can finish this feature.
I think we can close this issue. Not all items have been implemented but it is unclear to me if we will use the cache server in the long-run. Perhaps it will be replaced by CRDB. |
👍
@ncdc: Closing this issue.
Certain data in a kcp cluster is critical for operation and a single point of failure (SPOF) in a multi-region/multi-AZ environment, e.g. org workspaces. It is feasible to build a consistent cache hierarchy which serves consistent data by checking the freshness of the cache on quorum reads. Such a setup could give read-only availability to e.g. org workspaces.
Big question: do we run into time-travel problems, as we do with pods and kubelets? We must be super careful when answering consistent reads with potentially stale data in an outage situation. But for certain operations, like the personal workspace virtual workspace from @davidfestal, a stale read is good enough.
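The quorum-read freshness check could be sketched roughly as follows. Everything here is hypothetical and illustrative, not an existing kcp API: a cached object is served only if a majority of cache peers confirm that the resourceVersion we hold is not behind theirs.

```go
package main

import "fmt"

// peerReply is a hypothetical response from one cache peer reporting the
// newest resourceVersion it has seen for the object in question.
type peerReply struct {
	resourceVersion int64
	err             error
}

// freshByQuorum returns true if a majority of peers confirm that our cached
// resourceVersion is not stale, i.e. no majority knows a newer version.
func freshByQuorum(cachedRV int64, replies []peerReply) bool {
	quorum := len(replies)/2 + 1
	confirmations := 0
	for _, r := range replies {
		// A peer confirms freshness if it has seen nothing newer than our copy.
		if r.err == nil && r.resourceVersion <= cachedRV {
			confirmations++
		}
	}
	return confirmations >= quorum
}

func main() {
	replies := []peerReply{{resourceVersion: 10}, {resourceVersion: 12}, {resourceVersion: 9}}
	fmt.Println(freshByQuorum(12, replies)) // all peers at or below 12 → fresh
	fmt.Println(freshByQuorum(8, replies))  // majority ahead of 8 → stale
}
```

In an outage, a read that fails this check would either be rejected or explicitly served as stale, which is where the time-travel concern above bites.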
Acceptance Criteria
We're just looking for an MVP implementation here:
Items
- `cmd/cache-server` and `pkg/cache/server/{config.go,server.go}` and `pkg/cache/server/options` plumbing (cache-server scaffolding #1790)
- `apiextensions-apiserver` and wire CRD lister similar to today's `pkg/server/apiextensions.go`, with at least APIExports and APIResourceSchemas (`kcp/pkg/virtual/apiexport/schemas/apis.go`, line 36 in 8d2ed6a)
- `/registry/group/cache:resource:identity:shard/…` (cache-server: adds WithShardScope HTTP filter #1841)
- `/services/cache/shards/*/clusters/*` (🌱 cache: expose the cache-server under "/services/cache" path #1961)
- `/services/cache/shards/{shard-name}/clusters/{cluster-name}` (🌱 cache: expose the cache-server under "/services/cache" path #1961)
- `kcp` binary (🌱 cache: small refactor to make wiring into kcp easier #1949, 🌱 cache: moves common HTTP handlers to a shared pkg #1947, 🌱 Wire the cache server into the kcp server #1954)
- `--cache-url` into `kcp`, defaulting to localhost (🌱 kcp: add flags related to the cache server #1970)
- authorization
- data removal
- shard management, a few loose ideas:
  - what will decide when a new shard needs to be created?
  - which cluster does it get deployed to?
  - how do other shards become aware of the new shard?
- on kcp server: `TemporaryRootShardKcpSharedInformerFactory` in a dedicated post-start-hook
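The `/services/cache/shards/{shard-name}/clusters/{cluster-name}` scheme in the items above implies per-shard, per-cluster request scoping. A minimal sketch in Go of parsing such a path (illustrative only; this is not the actual WithShardScope filter from #1841):

```go
package main

import (
	"fmt"
	"strings"
)

// parseCachePath is a hypothetical helper that splits a request path of the
// form /services/cache/shards/{shard}/clusters/{cluster}[/...] into its shard
// and cluster components. It returns ok=false for any other path shape.
func parseCachePath(path string) (shard, cluster string, ok bool) {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	// expect at least: services, cache, shards, {shard}, clusters, {cluster}
	if len(parts) < 6 || parts[0] != "services" || parts[1] != "cache" ||
		parts[2] != "shards" || parts[4] != "clusters" {
		return "", "", false
	}
	return parts[3], parts[5], true
}

func main() {
	shard, cluster, ok := parseCachePath("/services/cache/shards/root/clusters/my-cluster")
	fmt.Println(shard, cluster, ok) // root my-cluster true
}
```

In a real server this kind of extraction would run as an HTTP filter that stashes the shard and cluster into the request context before the storage layer builds keys like `/registry/group/cache:resource:identity:shard/…`.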