@tobz tobz commented Jan 23, 2026

Summary

This PR switches to a new approach for generating our SMP experiment definitions from a centralized configuration file, supporting templating, overrides, and parameterization.

Currently, the highest-friction aspect of SMP experiments is defining them: it generally involves a lot of copy/paste, and nothing can be shared between experiments. Defining a new experiment that differs from an existing one by only a single value means copying all of the relevant files wholesale, recreating the same directory structure, and so on. This can be very cumbersome.

This PR introduces a new approach based on a single, centralized experiments.yaml file, in conjunction with a new script (generate_experiments.yaml) that expands the centralized configuration into the individual experiment directories/files.

This new configuration was designed specifically to address the pain points and shortcomings of the previous approach:

  • inheritance-based definitions
    • "global" configuration is used as the base for all experiments
    • template fragments can be defined to further group shared portions of experiment configuration
    • individual experiments can extend from template fragments, and their definitions override both the global configuration and template fragment configuration
  • optimization goals can be defined as a single item or multiple, and the generator will create the relevant clones of the experiment for each optimization goal
  • support for declaring file content (e.g., the stuff under /etc/agent-data-plane in the target) inline, or by symlinking local files, making it easier to include necessary files without having to copy/paste them multiple times
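To illustrate the shape this can take, here is a minimal, hypothetical sketch of such a centralized configuration. Every field name below is illustrative only and does not reflect the actual experiments.yaml schema:

```yaml
# Hypothetical sketch; field names are illustrative, not the real schema.
global:
  erratic: false

templates:
  dsd_uds_base:
    generator: dogstatsd
    transport: uds

experiments:
  dsd_uds_512kb_3k_contexts:
    extends: dsd_uds_base
    # One clone of the experiment is generated per optimization goal.
    optimization_goals: [memory, cpu, throughput]
    overrides:
      payload_size: 512kb
    files:
      # File content declared inline...
      /etc/agent-data-plane/datadog.yaml:
        inline: |
          hostname: smp-regression
      # ...or pulled in by symlinking a shared local file.
      /etc/agent-data-plane/pipeline.yaml:
        symlink: shared/pipeline.yaml
```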

We've also added two Make targets -- generate-smp-experiments and check-smp-experiments -- for regenerating the individual experiment definitions and for checking that the checked-in experiment definitions are up-to-date with experiments.yaml.
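The layered override behavior described above (global configuration, then template fragment, then the experiment's own definition) can be sketched as a recursive map merge. This is an illustrative sketch of the idea, not the actual generator implementation:

```python
# Sketch of layered override resolution: experiment values win over
# template-fragment values, which win over global values.
# Illustrative only; not the actual generate_experiments implementation.

def deep_merge(base: dict, overlay: dict) -> dict:
    """Recursively merge overlay into base, with overlay winning on conflicts."""
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def resolve_experiment(global_cfg: dict, template_cfg: dict, experiment_cfg: dict) -> dict:
    # global -> template fragment -> experiment; later layers override earlier ones.
    return deep_merge(deep_merge(global_cfg, template_cfg), experiment_cfg)
```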

Change Type

  • Bug fix
  • New feature
  • Non-functional (chore, refactoring, docs)
  • Performance

How did you test this PR?

Ran the new Make targets and ensured the generated experiment files looked correct. Further tested by ensuring all defined experiments run/pass in CI.

References

AGTMETRICS-393

@tobz tobz requested a review from a team as a code owner January 23, 2026 16:04
@tobz tobz added the type/chore Updates to dependencies or general "administrative" tasks necessary to maintain the codebase/repo. label Jan 23, 2026
@dd-octo-sts dd-octo-sts bot added the area/test All things testing: unit/integration, correctness, SMP regression, etc. label Jan 23, 2026

pr-commenter bot commented Jan 23, 2026

Regression Detector (Agent Data Plane)

Regression Detector Results

Run ID: 99b6afbc-4d19-4171-9c56-38b94bc75950

Baseline: 779305a
Comparison: 01e9854
Diff

❌ Experiments with retried target crashes

This is a critical error. One or more replicates failed with a non-zero exit code. These replicates may have been retried. See Replicate Execution Details for more information.

  • dsd_uds_512kb_3k_contexts_memory
  • quality_gates_rss_dsd_heavy
  • quality_gates_rss_dsd_ultraheavy

Optimization Goals: ✅ No significant changes detected

Experiments ignored for regressions

Regressions in experiments with settings containing erratic: true are ignored.

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- | --- |
| | otlp_ingest_logs_adp | memory utilization | +1.01 | [+0.78, +1.24] | 1 | (metrics) (profiles) (logs) |

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- | --- |
| | otlp_ingest_logs_adp | memory utilization | +1.01 | [+0.78, +1.24] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_1mb_3k_contexts_cpu | % cpu utilization | +0.96 | [-51.31, +53.23] | 1 | (metrics) (profiles) (logs) |
| | quality_gates_rss_dsd_low | memory utilization | +0.35 | [+0.22, +0.49] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_10mb_3k_contexts_cpu | % cpu utilization | +0.35 | [-30.80, +31.50] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_512kb_3k_contexts_memory | memory utilization | +0.29 | [+0.14, +0.44] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_512kb_3k_contexts_cpu | % cpu utilization | +0.25 | [-54.64, +55.14] | 1 | (metrics) (profiles) (logs) |
| | quality_gates_rss_idle | memory utilization | +0.24 | [+0.21, +0.28] | 1 | (metrics) (profiles) (logs) |
| | quality_gates_rss_dsd_medium | memory utilization | +0.24 | [+0.07, +0.41] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_100mb_3k_contexts_memory | memory utilization | +0.09 | [-0.08, +0.25] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_500mb_3k_contexts_cpu | % cpu utilization | +0.03 | [-1.49, +1.55] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_1mb_3k_contexts_throughput | ingress throughput | +0.00 | [-0.05, +0.06] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_100mb_3k_contexts_throughput | ingress throughput | +0.00 | [-0.03, +0.03] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_512kb_3k_contexts_throughput | ingress throughput | +0.00 | [-0.05, +0.05] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_10mb_3k_contexts_throughput | ingress throughput | -0.02 | [-0.20, +0.17] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_10mb_3k_contexts_memory | memory utilization | -0.06 | [-0.23, +0.10] | 1 | (metrics) (profiles) (logs) |
| | quality_gates_rss_dsd_ultraheavy | memory utilization | -0.07 | [-0.20, +0.05] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_500mb_3k_contexts_memory | memory utilization | -0.09 | [-0.24, +0.06] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_1mb_3k_contexts_memory | memory utilization | -0.15 | [-0.30, +0.01] | 1 | (metrics) (profiles) (logs) |
| | quality_gates_rss_dsd_heavy | memory utilization | -0.20 | [-0.33, -0.06] | 1 | (metrics) (profiles) (logs) |
| | otlp_ingest_metrics_adp | memory utilization | -0.24 | [-0.45, -0.03] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_500mb_3k_contexts_throughput | ingress throughput | -0.27 | [-0.41, -0.13] | 1 | (metrics) (profiles) (logs) |
| | dsd_uds_100mb_3k_contexts_cpu | % cpu utilization | -0.79 | [-7.20, +5.62] | 1 | (metrics) (profiles) (logs) |

Bounds Checks: ✅ Passed

| perf | experiment | bounds_check_name | replicates_passed | links |
| --- | --- | --- | --- | --- |
| | quality_gates_rss_dsd_heavy | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| | quality_gates_rss_dsd_low | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| | quality_gates_rss_dsd_medium | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| | quality_gates_rss_dsd_ultraheavy | memory_usage | 10/10 | (metrics) (profiles) (logs) |
| | quality_gates_rss_idle | memory_usage | 10/10 | (metrics) (profiles) (logs) |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
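As an illustration of how such an interval could be estimated, here is a minimal sketch that assumes per-replicate percentage deltas and approximate normality of their mean; the detector's actual statistical model (and how it pairs or pools replicates) may differ:

```python
# Sketch of estimating "Δ mean %" and its 90% confidence interval.
# Simplifying assumptions: baseline/comparison replicates are paired
# index-by-index, and the mean delta is approximately normal.
from math import sqrt
from statistics import NormalDist, mean, stdev

def delta_mean_pct_ci(baseline: list[float], comparison: list[float],
                      confidence: float = 0.90):
    # Percent change per replicate pair: "comparison minus baseline".
    deltas = [100.0 * (c - b) / b for b, c in zip(baseline, comparison)]
    center = mean(deltas)
    # Normal critical value for a two-sided interval (e.g. ~1.645 at 90%).
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    half_width = z * stdev(deltas) / sqrt(len(deltas))
    return center, (center - half_width, center + half_width)
```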

For each experiment, we consider a change in performance a "regression" -- a change worth investigating further -- only if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
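The three criteria above can be sketched as a single predicate (illustrative only, not the detector's actual code; parameter names are hypothetical):

```python
# Sketch of the regression decision: all three criteria must hold.
def is_regression(delta_mean_pct: float, ci: tuple[float, float],
                  erratic: bool, tolerance_pct: float = 5.0) -> bool:
    big_enough = abs(delta_mean_pct) >= tolerance_pct   # criterion 1
    ci_excludes_zero = ci[0] > 0 or ci[1] < 0           # criterion 2
    not_erratic = not erratic                           # criterion 3
    return big_enough and ci_excludes_zero and not_erratic
```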

Replicate Execution Details

We run multiple replicates for each experiment/variant. However, we allow replicates to be automatically retried on failure, up to 8 times, at which point the replicate is marked dead and we are unable to run analysis for the entire experiment. We call each of these attempts at running a replicate a replicate execution. This section lists all replicate executions that failed due to the target crashing or being OOM-killed.
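The retry policy described above can be sketched as follows (illustrative only; the attempt limit interpretation, names, and return values are hypothetical):

```python
# Sketch of the replicate retry policy: each replicate execution is
# retried on failure; if every attempt fails, the replicate is marked
# dead and the whole experiment cannot be analyzed.
from typing import Callable

MAX_ATTEMPTS = 8  # assumed cap on executions per replicate

def run_replicate(execute: Callable[[], bool]) -> str:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if execute():  # True when the target exits cleanly
            return f"passed (attempt {attempt})"
    return "dead"
```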

Note: In the tables below we bucket failures by experiment, variant, and failure type. For each bucket we list the replicate indexes that failed, annotated with how many times each replicate failed with the given failure mode. In the example below, the baseline variant of the experiment named experiment_with_failures had two replicates that failed via OOM kills: replicate 0 failed 8 executions and replicate 1 failed 6 executions, all with the same failure mode.

| Experiment | Variant | Replicates | Failure | Logs | Debug Dashboard |
| --- | --- | --- | --- | --- | --- |
| experiment_with_failures | baseline | 0 (x8), 1 (x6) | Oom killed | | Debug Dashboard |

The debug dashboard links will take you to a debugging dashboard specifically designed to investigate replicate execution failures.

❌ Retried Normal Replicate Execution Failures (non-profiling)

| Experiment | Variant | Replicates | Failure | Debug Dashboard |
| --- | --- | --- | --- | --- |
| dsd_uds_512kb_3k_contexts_memory | comparison | 7 | Failed to shutdown when requested | Debug Dashboard |
| quality_gates_rss_dsd_heavy | comparison | 6, 4 | Failed to shutdown when requested | Debug Dashboard |
| quality_gates_rss_dsd_ultraheavy | baseline | 0 | Failed to shutdown when requested | Debug Dashboard |

@dd-octo-sts dd-octo-sts bot added the area/ci CI/CD, automated testing, etc. label Jan 23, 2026

pr-commenter bot commented Jan 23, 2026

Binary Size Analysis (Agent Data Plane)

Target: def4de1 (baseline) vs 73bbf7f (comparison) diff
Baseline Size: 361.64 MiB
Comparison Size: 361.64 MiB
Size Change: +0 B (+0.00%)
Pass/Fail Threshold: +5%
Result: PASSED ✅

Changes by Module

| Module | File Size | Symbols |
| --- | --- | --- |
| anon.6963da505859e1c5d18d13cd64b99327.1.llvm.12653801665776546318 | +130 B | 1 |
| anon.6963da505859e1c5d18d13cd64b99327.1.llvm.17802331769623669269 | -130 B | 1 |
| anon.6963da505859e1c5d18d13cd64b99327.4.llvm.12653801665776546318 | +115 B | 1 |
| anon.6963da505859e1c5d18d13cd64b99327.4.llvm.17802331769623669269 | -115 B | 1 |
| anon.6963da505859e1c5d18d13cd64b99327.3.llvm.12653801665776546318 | +109 B | 1 |
| anon.6963da505859e1c5d18d13cd64b99327.3.llvm.17802331769623669269 | -109 B | 1 |
| anon.6963da505859e1c5d18d13cd64b99327.0.llvm.12653801665776546318 | +97 B | 1 |
| anon.6963da505859e1c5d18d13cd64b99327.0.llvm.17802331769623669269 | -97 B | 1 |
| anon.6963da505859e1c5d18d13cd64b99327.2.llvm.12653801665776546318 | +95 B | 1 |
| anon.6963da505859e1c5d18d13cd64b99327.2.llvm.17802331769623669269 | -95 B | 1 |

Detailed Symbol Changes

    FILE SIZE        VM SIZE    
 --------------  -------------- 
  [NEW]    +130  [NEW]     +40    anon.6963da505859e1c5d18d13cd64b99327.1.llvm.12653801665776546318
  [NEW]    +115  [NEW]     +25    anon.6963da505859e1c5d18d13cd64b99327.4.llvm.12653801665776546318
  [NEW]    +109  [NEW]     +19    anon.6963da505859e1c5d18d13cd64b99327.3.llvm.12653801665776546318
  [NEW]     +97  [NEW]      +7    anon.6963da505859e1c5d18d13cd64b99327.0.llvm.12653801665776546318
  [NEW]     +95  [NEW]      +5    anon.6963da505859e1c5d18d13cd64b99327.2.llvm.12653801665776546318
  [DEL]     -95  [DEL]      -5    anon.6963da505859e1c5d18d13cd64b99327.2.llvm.17802331769623669269
  [DEL]     -97  [DEL]      -7    anon.6963da505859e1c5d18d13cd64b99327.0.llvm.17802331769623669269
  [DEL]    -109  [DEL]     -19    anon.6963da505859e1c5d18d13cd64b99327.3.llvm.17802331769623669269
  [DEL]    -115  [DEL]     -25    anon.6963da505859e1c5d18d13cd64b99327.4.llvm.17802331769623669269
  [DEL]    -130  [DEL]     -40    anon.6963da505859e1c5d18d13cd64b99327.1.llvm.17802331769623669269
  [ = ]       0  [ = ]       0    TOTAL

@dd-octo-sts dd-octo-sts bot removed the area/ci CI/CD, automated testing, etc. label Jan 23, 2026
@dd-octo-sts dd-octo-sts bot added the area/ci CI/CD, automated testing, etc. label Jan 23, 2026
@tobz tobz force-pushed the tobz/templated-smp-experiments-20260123 branch from 01e9854 to c900e84 Compare January 24, 2026 00:31
@tobz tobz merged commit 3986a8b into main Jan 24, 2026
54 of 55 checks passed
@tobz tobz deleted the tobz/templated-smp-experiments-20260123 branch January 24, 2026 04:49
dd-octo-sts bot pushed a commit that referenced this pull request Jan 24, 2026