
chore: version selection benchmarking#1064

Merged
adityachoudhari26 merged 3 commits into main from version-selector-benchmarking
Apr 28, 2026

Conversation


@adityachoudhari26 (Member) commented Apr 24, 2026

resolves #1062

Summary by CodeRabbit

  • Tests
    • Added performance benchmarks for metadata-based selectors: measures steady-state selector evaluation, a native equality baseline, and compile-only cost.
    • Benchmarks exercise varying metadata sizes and corpus sizes, reporting throughput (versions/sec) and allocation metrics to track execution and compile-time efficiency.

Copilot AI review requested due to automatic review settings April 24, 2026 23:31

@coderabbitai (bot) commented Apr 24, 2026

Warning

Rate limit exceeded

@adityachoudhari26 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 49 minutes and 12 seconds before requesting another review.

To keep reviews running without waiting, you can enable the usage-based add-on for your organization, which allows additional reviews beyond the hourly cap. Account admins can enable it under billing.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 861446e5-ba37-4b60-970f-709b1cf3ee9d

📥 Commits

Reviewing files that changed from the base of the PR and between 9487920 and 29ba423.

📒 Files selected for processing (1)
  • apps/workspace-engine/pkg/workspace/releasemanager/policy/evaluator/versionselector/metadata_selector_bench_test.go
📝 Walkthrough

Adds a new benchmark test file for the versionselector package that measures CEL selector evaluation, a native equality baseline, and compile-time cost using generated synthetic deployment version metadata (parameterized by corpus size and metadata map size).

Changes

Benchmark Tests — apps/workspace-engine/pkg/workspace/releasemanager/policy/evaluator/versionselector/metadata_selector_bench_test.go
New Go benchmark file adding BenchmarkMetadataSelector_Eval, BenchmarkMetadataSelector_NativeEq, and BenchmarkMetadataSelector_Compile. Generates deterministic per-version contexts, measures allocations, reports versions/sec, and provides a compileUncached helper to force uncached compilation.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

🐰 I nibble data, hop and test,
CEL and native put to quest;
Compiles and counts, I skip and run,
Versions per second — benchmark fun! 🥕

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description check — Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check — Passed: the title 'chore: version selection benchmarking' accurately summarizes the main change: adding benchmark tests for version selection with CEL.
  • Linked Issues check — Passed: the PR adds comprehensive benchmarking for deployment version CEL filtering, directly addressing issue #1062's requirement for performance benchmarks.
  • Out of Scope Changes check — Passed: all changes are focused on adding benchmark tests for version selection, directly aligned with the linked issue about CEL filtering benchmarks.
  • Docstring Coverage — Passed: docstring coverage is 100.00%, above the required 80.00% threshold.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




Copilot AI left a comment


Pull request overview

Adds a Go benchmark suite to measure CEL-based deployment version metadata selector performance, addressing issue #1062.

Changes:

  • Introduces benchmarks for steady-state CEL evaluation across selector shapes, corpus sizes, and metadata map sizes.
  • Adds a native Go baseline benchmark for the env == "prod" check to estimate CEL overhead.
  • Adds a compile-only benchmark intended to measure compilation cost without program-cache hits.


Comment on lines +58 to +60
for i := range n {
meta := make(map[string]any, mapSize)


Copilot AI Apr 24, 2026


for i := range n requires Go 1.22 or newer (ranging over an integer was added in Go 1.22); on earlier toolchains it is a compile error. If the module targets an older Go version, use an indexed loop to fill contexts (e.g., increment i from 0 to n-1).

Copilot uses AI. Check for mistakes.
Comment on lines +77 to +79
for k := range mapSize - len(meta) {
meta[fmt.Sprintf("filler_%d", k)] = fmt.Sprintf("value_%d", r.Intn(1_000_000))
}

Copilot AI Apr 24, 2026


for k := range mapSize - len(meta) requires Go 1.22 or newer (range over an integer expression); on earlier toolchains the benchmark will not compile. A standard counted loop works on every version, and either way, guard against mapSize < len(meta) if that's possible.

for i := 0; i < b.N; i++ {
matches = 0
for _, ctx := range contexts {
ok, _ := evaluate(program, ctx)

Copilot AI Apr 24, 2026


The benchmark ignores the error returned by evaluate(program, ctx). If evaluation fails (e.g., due to a non-missing-key runtime error), the benchmark will silently treat it as a non-match and continue, which can invalidate results. Consider failing the benchmark on non-nil error (or at least counting/reporting errors).

Suggested change
ok, _ := evaluate(program, ctx)
ok, err := evaluate(program, ctx)
if err != nil {
b.Fatalf("evaluate %q: %v", shape.label, err)
}

Comment on lines +170 to +171
// ristretto cache in compiledEnv by building a fresh env per run. Useful to
// size cache-miss impact separately from the steady-state Eval cost.

Copilot AI Apr 24, 2026


The comment claims BenchmarkMetadataSelector_Compile bypasses the cache “by building a fresh env per run”, but compileUncached uses compiledEnv.Env() which returns the existing underlying *cel.Env (no new env is created). Update the comment to reflect it only bypasses the ristretto program cache, or create a new env per iteration if that’s what you want to measure.

Suggested change
// ristretto cache in compiledEnv by building a fresh env per run. Useful to
// size cache-miss impact separately from the steady-state Eval cost.
// ristretto program cache in compiledEnv. Useful to size cache-miss impact
// separately from the steady-state Eval cost.


@coderabbitai (bot) left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/workspace-engine/pkg/workspace/releasemanager/policy/evaluator/versionselector/metadata_selector_bench_test.go`:
- Around line 112-116: The benchmark currently ignores errors from evaluate (ok,
_ := evaluate(...)), which can hide failures; change the call to capture the
error (ok, err := evaluate(program, ctx)) and handle it (e.g., b.Fatalf or
b.Errorf + continue) so the benchmark fails loudly on evaluator errors; keep the
matches increment logic only when ok is true and include the error in the
failure message to aid debugging.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 07deb073-a2c8-4241-96e1-a0dfe3758a83

📥 Commits

Reviewing files that changed from the base of the PR and between 451e4f6 and 91506e5.

📒 Files selected for processing (1)
  • apps/workspace-engine/pkg/workspace/releasemanager/policy/evaluator/versionselector/metadata_selector_bench_test.go


@coderabbitai (bot) left a comment


♻️ Duplicate comments (1)
apps/workspace-engine/pkg/workspace/releasemanager/policy/evaluator/versionselector/metadata_selector_bench_test.go (1)

115-118: ⚠️ Potential issue | 🟠 Major

Fail fast on evaluator errors in the hot loop.

Ignoring evaluate errors can silently corrupt benchmark results and hide breakages.

Proposed fix
-						ok, _ := evaluate(program, ctx)
+						ok, err := evaluate(program, ctx)
+						if err != nil {
+							b.Fatalf("evaluate shape=%q failed: %v", shape.label, err)
+						}
 						if ok {
 							matches++
 						}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/workspace-engine/pkg/workspace/releasemanager/policy/evaluator/versionselector/metadata_selector_bench_test.go`
around lines 115 - 118, The benchmark loop calls evaluate(program, ctx) and
discards the error (ok, _ := evaluate(...)), which can hide failures and corrupt
results; change the call to capture the error (ok, err := evaluate(program,
ctx)) and fail fast inside the hot loop when err != nil (e.g., b.Fatalf/t.Fatalf
or panic) so the benchmark stops and surfaces the underlying failure instead of
silently counting matches; update the logic around matches to only increment
when err == nil && ok to preserve semantics.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 84149e16-acbf-4a15-9263-7a0018badb3d

📥 Commits

Reviewing files that changed from the base of the PR and between 91506e5 and 9487920.

📒 Files selected for processing (1)
  • apps/workspace-engine/pkg/workspace/releasemanager/policy/evaluator/versionselector/metadata_selector_bench_test.go

@adityachoudhari26 adityachoudhari26 merged commit e832354 into main Apr 28, 2026
10 checks passed
@adityachoudhari26 adityachoudhari26 deleted the version-selector-benchmarking branch April 28, 2026 14:21


Development

Successfully merging this pull request may close these issues.

perf: deployment version CEL filtering benchmarks

2 participants