refactor: split ProjectQuota CRD into per-AZ CRDs #827
Conversation
umswmayj
commented
May 8, 2026
- Add AvailabilityZone field to ProjectQuotaSpec
- Change CRD naming from 'quota-{projectID}' to 'quota-{projectID}-{az}'
- Status fields (TotalUsage, PaygUsage) are now flat map[string]int64 per AZ
- Add ProjectQuotaResourceRouter in multicluster routers.go
- Update QuotaController to manage per-AZ CRDs
- Update HandleQuota API to create one CRD per project+AZ combination
- Update usage.go to aggregate across per-AZ CRDs
- derivePaygUsage is preserved and used in both reconcile paths
- Re-add Create_EmptyResources and Update_WithNewMetadata test cases
- Add partial AZ coverage tests (unit + integration)
- Add total calculation verification tests with multi-resource multi-AZ
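The naming and aggregation scheme from the bullets above can be sketched in a few lines. This is a hypothetical, dependency-free sketch — the helper names are assumptions, not the actual functions in `quota.go`/`usage.go`:

```go
package main

import "fmt"

// projectQuotaCRDName is an illustrative stand-in for the naming scheme
// 'quota-{projectID}-{az}' introduced by this PR.
func projectQuotaCRDName(projectID, az string) string {
	return "quota-" + projectID + "-" + az
}

// aggregateUsage sums flat per-AZ usage maps (map[resource]amount) into a
// project-wide total, mirroring what usage.go now does across per-AZ CRDs.
func aggregateUsage(perAZ map[string]map[string]int64) map[string]int64 {
	total := make(map[string]int64)
	for _, usage := range perAZ {
		for resource, amount := range usage {
			total[resource] += amount
		}
	}
	return total
}

func main() {
	fmt.Println(projectQuotaCRDName("p1", "az-a")) // quota-p1-az-a
	total := aggregateUsage(map[string]map[string]int64{
		"az-a": {"cores": 4, "ram": 8192},
		"az-b": {"cores": 2},
	})
	fmt.Println(total["cores"], total["ram"]) // 6 8192
}
```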
📝 Walkthrough

This PR refactors ProjectQuota storage from a single CRD per project with embedded per-availability-zone data to one flat CRD per project per availability zone, removes the intermediate structs, and flattens quota/usage status fields to flat per-AZ `map[string]int64` maps.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/scheduling/reservations/quota/controller.go (1)
1000-1011: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

**Critical: `mapCRToProjectQuota` uses the old (non-AZ) name and will never match a `ProjectQuota` CRD.**

After this PR, `ProjectQuota` CRDs are named `quota-<projectID>-<az>`, but the mapper still returns `"quota-" + cr.Spec.ProjectID`. The resulting reconcile request never matches any CRD, so `Reconcile()` will hit `IsNotFound` and silently return — meaning CR `UsedAmount` changes will no longer trigger a PaygUsage recompute on any per-AZ CRD, defeating the whole watch wiring set up in `SetupWithManager` (lines 632–648).

Since one `CommittedResource` is scoped to one AZ (`cr.Spec.AvailabilityZone`), the mapper should target that AZ's CRD directly.

🔧 Proposed fix

```diff
 // mapCRToProjectQuota maps a CommittedResource change to the affected ProjectQuota reconcile request.
 func (c *QuotaController) mapCRToProjectQuota(_ context.Context, obj client.Object) []reconcile.Request {
 	cr, ok := obj.(*v1alpha1.CommittedResource)
 	if !ok {
 		return nil
 	}
-	// Map to the ProjectQuota for this project
-	crdName := "quota-" + cr.Spec.ProjectID
+	if cr.Spec.AvailabilityZone == "" {
+		return nil
+	}
+	// Map to the per-AZ ProjectQuota matching this CR's AZ.
+	crdName := "quota-" + cr.Spec.ProjectID + "-" + cr.Spec.AvailabilityZone
 	return []reconcile.Request{
 		{NamespacedName: client.ObjectKey{Name: crdName}},
 	}
 }
```

Also worth adding test coverage that asserts a CR `UsedResources` change enqueues a reconcile for the matching per-AZ CRD.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/scheduling/reservations/quota/controller.go` around lines 1000 - 1011, mapCRToProjectQuota currently builds the old non-AZ CRD name ("quota-"+cr.Spec.ProjectID) so its reconcile.Request never matches per-AZ ProjectQuota CRDs; update the mapper (mapCRToProjectQuota) to cast obj to *v1alpha1.CommittedResource, read cr.Spec.ProjectID and cr.Spec.AvailabilityZone, and return a reconcile.Request for the per-AZ name "quota-<projectID>-<az>" (use the same naming scheme as ProjectQuota CRDs); also add/adjust unit test(s) to assert that a CommittedResource UsedResources/UsedAmount change enqueues a reconcile for the matching per-AZ ProjectQuota CRD.
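The test coverage suggested above can be sketched without controller-runtime. The types below are simplified stand-ins for `v1alpha1.CommittedResource` and `reconcile.Request`, and the function mirrors the proposed fix rather than the repository's actual code:

```go
package main

import "fmt"

// Stand-in for the relevant fields of v1alpha1.CommittedResourceSpec.
type committedResourceSpec struct {
	ProjectID        string
	AvailabilityZone string
}

// Stand-in for reconcile.Request's NamespacedName.
type request struct{ Name string }

// mapCRToProjectQuota mirrors the fixed mapper: route a CommittedResource
// to its per-AZ ProjectQuota CRD, skipping CRs without an AZ.
func mapCRToProjectQuota(spec committedResourceSpec) []request {
	if spec.AvailabilityZone == "" {
		return nil
	}
	return []request{{Name: "quota-" + spec.ProjectID + "-" + spec.AvailabilityZone}}
}

func main() {
	reqs := mapCRToProjectQuota(committedResourceSpec{ProjectID: "p1", AvailabilityZone: "az-b"})
	fmt.Println(len(reqs), reqs[0].Name) // 1 quota-p1-az-b
}
```

A real unit test would build a `CommittedResource` fixture, call the mapper, and assert the enqueued request's name equals the per-AZ CRD name.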
🧹 Nitpick comments (4)
internal/scheduling/reservations/commitments/api/quota.go (1)
180-194: ⚡ Quick win

**Silently swallowed List error during orphan cleanup.**

`if err := api.client.List(...); err == nil { ... }` discards any list error. If listing fails (e.g. cache not synced, transient API error), orphan AZ CRDs are silently left behind with no log line. Even though orphan cleanup is best-effort, it should at least be observable.

🔧 Proposed fix

```diff
-	// Delete orphan CRDs for AZs no longer present in the quota push.
-	var pqList v1alpha1.ProjectQuotaList
-	if err := api.client.List(ctx, &pqList, client.MatchingFields{idxProjectQuotaByProjectID: projectID}); err == nil {
-		for i := range pqList.Items {
+	// Delete orphan CRDs for AZs no longer present in the quota push.
+	var pqList v1alpha1.ProjectQuotaList
+	if err := api.client.List(ctx, &pqList, client.MatchingFields{idxProjectQuotaByProjectID: projectID}); err != nil {
+		log.Error(err, "failed to list ProjectQuota CRDs for orphan cleanup", "projectID", projectID)
+	} else {
+		for i := range pqList.Items {
 			pq := &pqList.Items[i]
 			...
 		}
 	}
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/scheduling/reservations/commitments/api/quota.go` around lines 180 - 194, The List error from api.client.List(ctx, &pqList, client.MatchingFields{idxProjectQuotaByProjectID: projectID}) is being ignored; change the logic in the orphan-cleanup block so errors are logged when List fails (rather than silently skipping), and only iterate pqList.Items when List returned nil; specifically, after calling api.client.List(...) check err != nil and use log.Error(err, ...) to record the failure (including context like "failed to list ProjectQuota", "projectID", projectID) and return/skip cleanup, otherwise proceed to iterate pqList.Items and delete orphan ProjectQuota entries as before.

internal/scheduling/reservations/commitments/usage.go (1)
147-154: ⚡ Quick win

**Silent fallback when listing per-AZ `ProjectQuota` CRDs fails.**

The condition `if err == nil && len(pqList.Items) > 0` collapses two distinct cases — "list failed" and "no quota pushed yet" — into the same silent path that leaves `quotaByResourceAZ` nil and reports infinite quota for every AZ. If the field index isn't registered or the API server returns a transient error, callers (Limes) will see infinite quota with no signal in logs.

Recommend logging the error and possibly returning it (since infinite-quota fallback can mask real misconfigurations).

🔧 Proposed fix

```diff
-	var pqList v1alpha1.ProjectQuotaList
-	var quotaByResourceAZ map[string]map[string]int64
-	if err := c.client.List(ctx, &pqList, client.MatchingFields{idxProjectQuotaByProjectID: projectID}); err == nil && len(pqList.Items) > 0 {
-		quotaByResourceAZ = buildCombinedQuotaMap(pqList.Items)
-	}
+	var pqList v1alpha1.ProjectQuotaList
+	var quotaByResourceAZ map[string]map[string]int64
+	if err := c.client.List(ctx, &pqList, client.MatchingFields{idxProjectQuotaByProjectID: projectID}); err != nil {
+		log.Error(err, "failed to list ProjectQuota CRDs; falling back to infinite quota", "projectID", projectID)
+	} else if len(pqList.Items) > 0 {
+		quotaByResourceAZ = buildCombinedQuotaMap(pqList.Items)
+	}
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/scheduling/reservations/commitments/usage.go` around lines 147 - 154, The current logic silently treats any c.client.List failure the same as an empty result by only proceeding when err == nil && len(pqList.Items) > 0, which can hide API/indexing errors and cause infinite-quota fallbacks; change the handling around c.client.List(ctx, &pqList, client.MatchingFields{idxProjectQuotaByProjectID: projectID}) so that if err != nil you log the error (including err and context like projectID) and propagate/return it (or at minimum surface it to the caller) instead of leaving quotaByResourceAZ nil; keep the successful path that calls buildCombinedQuotaMap(pqList.Items) when err == nil and handle the explicit empty-list case separately to preserve the intended "no quota pushed yet" behavior.

internal/scheduling/reservations/commitments/api/quota_test.go (1)
354-388: ⚡ Quick win

**Test does not assert orphan AZ cleanup behavior.**

The verification loop iterates `tc.expectPerAZ` and confirms each expected AZ CRD exists with the right values, but it does not assert that AZs no longer in the request were deleted (the orphan-cleanup path in `HandleQuota` lines 180–194). Consider adding a case that seeds an existing CRD for an AZ not present in the new request and asserts it is gone after the call — that path is otherwise untested.

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/scheduling/reservations/commitments/api/quota_test.go` around lines 354 - 388, Add an assertion that verifies orphan AZ CRDs are deleted: in the test case setup for the scenario exercising HandleQuota (the loop using tc.expectPerAZ and projectQuotaCRDName), create/seed an extra ProjectQuota CRD for an AZ that is NOT present in tc.expectPerAZ (use k8sClient.Create with a v1alpha1.ProjectQuota named via projectQuotaCRDName(tc.projectID, orphanAZ) and with Spec.ProjectID set to tc.projectID), call the function under test (HandleQuota), then attempt to GET that orphan CRD and assert it no longer exists (expect a not-found error, e.g. apierrors.IsNotFound(err)); place this check alongside the existing verification so the orphan-cleanup path (lines ~180–194 in HandleQuota) is exercised.

internal/scheduling/reservations/quota/integration_test.go (1)
1293-1301: ⚡ Quick win

**Remove obsolete project-scoped `makePQ` helper.**

Line 1293 keeps a legacy `quota-<project>` fixture helper (suppressed with `nolint`). Since tests are now AZ-scoped, removing it avoids accidental reintroduction of old naming.

Suggested cleanup

```diff
-func makePQ(projectID string, lastReconcileAt *metav1.Time) *v1alpha1.ProjectQuota { //nolint:unused
-	return &v1alpha1.ProjectQuota{
-		ObjectMeta: metav1.ObjectMeta{Name: "quota-" + projectID},
-		Spec:       v1alpha1.ProjectQuotaSpec{ProjectID: projectID, DomainID: "domain-1"},
-		Status: v1alpha1.ProjectQuotaStatus{
-			LastReconcileAt: lastReconcileAt,
-		},
-	}
-}
-
 // makePQPerAZ creates a per-AZ ProjectQuota CRD for integration tests.
 func makePQPerAZ(projectID, az string, lastReconcileAt *metav1.Time) *v1alpha1.ProjectQuota { //nolint:unparam
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/scheduling/reservations/quota/integration_test.go` around lines 1293 - 1301, Remove the obsolete project-scoped helper makePQ: delete the makePQ function (and its nolint comment) from the test file and update any tests referencing makePQ to use the current AZ-scoped helpers/fixtures instead (search for makePQ to find usages and replace with the appropriate AZ-scoped factory or inline fixture creation).
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@internal/scheduling/reservations/commitments/api/quota.go`:
- Around line 145-151: The Create error handling in the closure that calls
api.client.Create should not just return when apierrors.IsAlreadyExists is
observed; retry.RetryOnConflict only retries on IsConflict, so change the branch
in the closure (where api.client.Create(ctx, pq) is called) to either perform a
GET+Update flow when apierrors.IsAlreadyExists is returned (fetch the existing
quota, merge/apply changes and Update), or re-wrap/convert the AlreadyExists
into a Conflict so retry.RetryOnConflict will re-run the closure; update the
code paths around api.client.Create, apierrors.IsAlreadyExists and the
retry.RetryOnConflict usage (and adjust HandleQuota callers if needed) so
concurrent creates converge to an update rather than returning the original
createErr.
In `@internal/scheduling/reservations/quota/controller_test.go`:
- Around line 229-231: The test currently verifies AZ exclusion by checking
ramUsage["az-2"] != 0 which falsely passes when the key is missing because map
lookups default to zero; update the assertion to verify the key is absent
instead (e.g., use the comma-ok form to check presence: _, ok :=
ramUsage["az-2"] and assert that ok is false) so the test fails if "az-2" is
present; modify the assertion around the ramUsage lookup in the test (variable
name ramUsage in controller_test.go) accordingly.
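The pitfall this prompt describes is generic Go map behavior: indexing a missing key yields the zero value, so `ramUsage["az-2"] != 0` cannot distinguish "absent" from "present with value 0". A standalone illustration (the variable name matches the test, nothing else is from the repository):

```go
package main

import "fmt"

func main() {
	ramUsage := map[string]int64{"az-1": 4096}

	// Zero-value lookup: misleadingly "passes" even though az-2 was never set.
	fmt.Println(ramUsage["az-2"] == 0) // true

	// Comma-ok form: distinguishes a genuinely absent key from a stored zero.
	_, ok := ramUsage["az-2"]
	fmt.Println(ok) // false
}
```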
In `@internal/scheduling/reservations/quota/integration_test.go`:
- Around line 1080-1124: The current verifyTotalUsage helper only checks AZs
present in the expected map and misses any extra ProjectQuota CRDs written for
the same project; update verifyTotalUsage to also List all v1alpha1.ProjectQuota
objects (use env.client.List into a v1alpha1.ProjectQuotaList), filter those
whose Name has the "quota-"+projectID+"-" prefix, extract the AZ suffix, and if
an AZ is not present in the expected perAZ map but its
ProjectQuota.Status.TotalUsage is non-empty, call env.t.Errorf/fatal to fail the
test; apply the same pattern to the other per-AZ verifier helper in this file
(the one immediately after verifyTotalUsage) so unexpected AZ CRDs with
populated usage are detected.
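The prefix filter and AZ-suffix extraction that the verifier update needs can be sketched independently of the Kubernetes client. The helper name below is illustrative, not the actual test code:

```go
package main

import (
	"fmt"
	"strings"
)

// azFromCRDName extracts the AZ suffix from a per-AZ ProjectQuota CRD name
// of the form "quota-<projectID>-<az>", returning ok=false for names that
// belong to another project or use another naming scheme.
func azFromCRDName(name, projectID string) (string, bool) {
	prefix := "quota-" + projectID + "-"
	if !strings.HasPrefix(name, prefix) {
		return "", false
	}
	return strings.TrimPrefix(name, prefix), true
}

func main() {
	names := []string{"quota-p1-az-a", "quota-p1-az-b", "quota-p2-az-a"}
	expected := map[string]bool{"az-a": true} // AZs the test expects for p1
	for _, n := range names {
		if az, ok := azFromCRDName(n, "p1"); ok && !expected[az] {
			// In the real helper this would be env.t.Errorf.
			fmt.Println("unexpected AZ CRD:", n)
		}
	}
}
```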
---
Outside diff comments:
In `@internal/scheduling/reservations/quota/controller.go`:
- Around line 1000-1011: mapCRToProjectQuota currently builds the old non-AZ CRD
name ("quota-"+cr.Spec.ProjectID) so its reconcile.Request never matches per-AZ
ProjectQuota CRDs; update the mapper (mapCRToProjectQuota) to cast obj to
*v1alpha1.CommittedResource, read cr.Spec.ProjectID and
cr.Spec.AvailabilityZone, and return a reconcile.Request for the per-AZ name
"quota-<projectID>-<az>" (use the same naming scheme as ProjectQuota CRDs); also
add/adjust unit test(s) to assert that a CommittedResource
UsedResources/UsedAmount change enqueues a reconcile for the matching per-AZ
ProjectQuota CRD.
---
Nitpick comments:
In `@internal/scheduling/reservations/commitments/api/quota_test.go`:
- Around line 354-388: Add an assertion that verifies orphan AZ CRDs are
deleted: in the test case setup for the scenario exercising HandleQuota (the
loop using tc.expectPerAZ and projectQuotaCRDName), create/seed an extra
ProjectQuota CRD for an AZ that is NOT present in tc.expectPerAZ (use
k8sClient.Create with a v1alpha1.ProjectQuota named via
projectQuotaCRDName(tc.projectID, orphanAZ) and with Spec.ProjectID set to
tc.projectID), call the function under test (HandleQuota), then attempt to GET
that orphan CRD and assert it no longer exists (expect a not-found error, e.g.
apierrors.IsNotFound(err)); place this check alongside the existing verification
so the orphan-cleanup path (lines ~180–194 in HandleQuota) is exercised.
In `@internal/scheduling/reservations/commitments/api/quota.go`:
- Around line 180-194: The List error from api.client.List(ctx, &pqList,
client.MatchingFields{idxProjectQuotaByProjectID: projectID}) is being ignored;
change the logic in the orphan-cleanup block so errors are logged when List
fails (rather than silently skipping), and only iterate pqList.Items when List
returned nil; specifically, after calling api.client.List(...) check err != nil
and use log.Error(err, ...) to record the failure (including context like
"failed to list ProjectQuota", "projectID", projectID) and return/skip cleanup,
otherwise proceed to iterate pqList.Items and delete orphan ProjectQuota entries
as before.
In `@internal/scheduling/reservations/commitments/usage.go`:
- Around line 147-154: The current logic silently treats any c.client.List
failure the same as an empty result by only proceeding when err == nil &&
len(pqList.Items) > 0, which can hide API/indexing errors and cause
infinite-quota fallbacks; change the handling around c.client.List(ctx, &pqList,
client.MatchingFields{idxProjectQuotaByProjectID: projectID}) so that if err !=
nil you log the error (including err and context like projectID) and
propagate/return it (or at minimum surface it to the caller) instead of leaving
quotaByResourceAZ nil; keep the successful path that calls
buildCombinedQuotaMap(pqList.Items) when err == nil and handle the explicit
empty-list case separately to preserve the intended "no quota pushed yet"
behavior.
In `@internal/scheduling/reservations/quota/integration_test.go`:
- Around line 1293-1301: Remove the obsolete project-scoped helper makePQ:
delete the makePQ function (and its nolint comment) from the test file and
update any tests referencing makePQ to use the current AZ-scoped
helpers/fixtures instead (search for makePQ to find usages and replace with the
appropriate AZ-scoped factory or inline fixture creation).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5a6256f2-8ee2-4457-adb3-1dfcac54d15b
📒 Files selected for processing (11)
- api/v1alpha1/project_quota_types.go
- api/v1alpha1/zz_generated.deepcopy.go
- helm/library/cortex/files/crds/cortex.cloud_projectquotas.yaml
- internal/scheduling/reservations/commitments/api/quota.go
- internal/scheduling/reservations/commitments/api/quota_test.go
- internal/scheduling/reservations/commitments/api/report_usage_test.go
- internal/scheduling/reservations/commitments/usage.go
- internal/scheduling/reservations/quota/controller.go
- internal/scheduling/reservations/quota/controller_test.go
- internal/scheduling/reservations/quota/integration_test.go
- pkg/multicluster/routers.go
- Add AvailabilityZone field to ProjectQuotaSpec
- Change CRD naming from 'quota-{projectID}' to 'quota-{projectID}-{az}'
- Status fields (TotalUsage, PaygUsage) are now flat map[string]int64 per AZ
- Add ProjectQuotaResourceRouter in multicluster routers.go
- Update QuotaController to manage per-AZ CRDs
- Update HandleQuota API to create one CRD per project+AZ combination
- Update usage.go to aggregate across per-AZ CRDs
- Fix mapCRToProjectQuota to use per-AZ CRD name
- Fix crUsedAmountChangePredicate: CreateFunc now returns true
so CR creation triggers quota reconcile (PaygUsage recompute)
- derivePaygUsage is preserved and used in both reconcile paths
- Re-add Create_EmptyResources and Update_WithNewMetadata test cases
- Add partial AZ coverage tests (unit + integration)
- Add total calculation verification tests with multi-resource multi-AZ
- Fix crUsedAmountChangePredicate: CreateFunc now returns true
so creating a CommittedResource triggers PaygUsage recompute
- Fix mapCRToProjectQuota: use per-AZ CRD name (quota-{project}-{az})
so both create and delete events route to the correct ProjectQuota
- Add DomainName verification to quota_test.go (Create_WithMetadata
and Update_WithNewMetadata cases now assert DomainName is set)
- quota.go: wrap IsAlreadyExists as conflict error so RetryOnConflict re-runs the closure and falls into the Get+Update branch on races
- controller_test.go: assert AZ key absence with comma-ok idiom instead of comparing to zero (which passes for missing keys too)
- integration_test.go: add guards in verifyTotalUsage and verifyPaygUsage that scan all ProjectQuota CRDs for the project and fail if an unexpected AZ CRD has non-empty usage (catches extra writes)
Force-pushed 6b39c7a to 822a907 (Compare)
Resolved conflicts:
- pkg/multicluster/routers.go: kept both ProjectQuotaResourceRouter and FlavorGroupCapacityResourceRouter
- internal/scheduling/reservations/commitments/usage.go: kept quotaByResourceAZ (our per-AZ approach) and added config APIConfig parameter from main
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@helm/library/cortex/files/crds/cortex.cloud_projectquotas.yaml`:
- Around line 68-72: The CRD schema for spec.availabilityZone currently only
declares type: string (field availabilityZone) which allows empty strings;
update the schema for availabilityZone (spec.availabilityZone) to disallow empty
values by adding minLength: 1 (or a stricter pattern property if desired) under
the availabilityZone schema so admission rejects "" values and ensures valid
zone names at creation time.
- Around line 94-97: The CRD allows negative values for the quota and usage
maps; update the schemas for the map value definitions (the additionalProperties
blocks used for quota and for usage) to enforce non-negative integers by adding
"minimum: 0" to those additionalProperties objects (they already have "type:
integer" and "format: int64") so map entries cannot be negative; apply the same
change to the second occurrence referenced in the comment.
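Taken together, the two schema tightenings described above would look roughly like this in the CRD. This is an abbreviated sketch with surrounding structure elided; field names come from the prompts, everything else is assumed:

```yaml
# Abbreviated ProjectQuota CRD schema fragment (illustrative, not the full file).
spec:
  properties:
    availabilityZone:
      type: string
      minLength: 1          # admission rejects "" values
    quota:
      type: object
      additionalProperties: # same pattern applies to the usage maps
        type: integer
        format: int64
        minimum: 0          # map entries cannot be negative
```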
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d89676ef-4cad-48a4-8ec8-6c9bd363663a
📒 Files selected for processing (11)
- api/v1alpha1/project_quota_types.go
- api/v1alpha1/zz_generated.deepcopy.go
- helm/library/cortex/files/crds/cortex.cloud_projectquotas.yaml
- internal/scheduling/reservations/commitments/api/quota.go
- internal/scheduling/reservations/commitments/api/quota_test.go
- internal/scheduling/reservations/commitments/api/report_usage_test.go
- internal/scheduling/reservations/commitments/usage.go
- internal/scheduling/reservations/quota/controller.go
- internal/scheduling/reservations/quota/controller_test.go
- internal/scheduling/reservations/quota/integration_test.go
- pkg/multicluster/routers.go
✅ Files skipped from review due to trivial changes (1)
- api/v1alpha1/zz_generated.deepcopy.go
🚧 Files skipped from review as they are similar to previous changes (9)
- pkg/multicluster/routers.go
- internal/scheduling/reservations/commitments/api/report_usage_test.go
- internal/scheduling/reservations/commitments/usage.go
- internal/scheduling/reservations/quota/controller_test.go
- api/v1alpha1/project_quota_types.go
- internal/scheduling/reservations/commitments/api/quota.go
- internal/scheduling/reservations/quota/controller.go
- internal/scheduling/reservations/commitments/api/quota_test.go
- internal/scheduling/reservations/quota/integration_test.go
- Add minLength:1 to spec.availabilityZone (prevent empty string)
- Add minimum:0 to quota, paygUsage, totalUsage map values (prevent negatives)
- Add +kubebuilder:validation:MinLength=1 marker to Go type
- Add minLength:1 to spec.availabilityZone CRD schema (prevents empty string)
- Add +kubebuilder:validation:MinLength=1 marker to Go type
- Note: minimum:0 on map values not achievable via controller-gen markers
Force-pushed e223d76 to 4a84f0c (Compare)
Test Coverage Report
Test Coverage 📊: 69.1%
PR #827 split the ProjectQuota CRD into per-AZ CRDs and added a ProjectQuotaResourceRouter in pkg/multicluster/routers.go, but this router was never registered in the ResourceRouters map in cmd/manager/main.go. This is the same class of bug that affected FlavorGroupCapacity (fixed in PR #828), where the router implementation existed but was not wired into the manager entrypoint.

Without this registration, ProjectQuota CRDs are not routed to the correct per-AZ clusters in multi-cluster deployments, meaning quota data would not be synced properly. This commit adds the ProjectQuota GVK and its corresponding ProjectQuotaResourceRouter to the ResourceRouters map, following the exact same pattern used by all other registered routers.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: cortex-ai-agents[bot] <279748396+cortex-ai-agents[bot]@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
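The registration pattern described above can be sketched without the real manager code. The types below are simplified stand-ins — the actual code uses `schema.GroupVersionKind` and the project's multicluster router interface, so all names here are illustrative assumptions:

```go
package main

import "fmt"

// Stand-in for schema.GroupVersionKind.
type gvk struct{ Group, Version, Kind string }

// Stand-in for the multicluster resource-router interface.
type resourceRouter interface {
	// Route returns the target cluster for a per-AZ object.
	Route(az string) string
}

// projectQuotaRouter stands in for ProjectQuotaResourceRouter:
// a per-AZ ProjectQuota CRD is routed to that AZ's cluster.
type projectQuotaRouter struct{}

func (projectQuotaRouter) Route(az string) string { return "cluster-" + az }

func main() {
	// The fix: register the ProjectQuota GVK in the ResourceRouters map,
	// following the same pattern as the other registered routers.
	resourceRouters := map[gvk]resourceRouter{
		{Group: "cortex.cloud", Version: "v1alpha1", Kind: "ProjectQuota"}: projectQuotaRouter{},
	}
	r, ok := resourceRouters[gvk{Group: "cortex.cloud", Version: "v1alpha1", Kind: "ProjectQuota"}]
	fmt.Println(ok, r.Route("az-b")) // true cluster-az-b
}
```

Forgetting the map entry (the bug fixed here) means the lookup returns the zero value and the resource is never routed, which is exactly the silent failure mode described.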