feat(experiments): Always compute timeseries for running experiments #57771
jurajmajerik merged 1 commit into master
Conversation
The timeseries chart was gated on `scheduling_config.timeseries=true`, which only two of the many experiment-creation paths actually set. Drop the gate (backend filters + frontend UI/write-sides) so all running experiments with metrics get timeseries.
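To illustrate the behavioral change, here is a minimal sketch of the gate being dropped, using plain dicts in place of the real queryset; all names and the helper are illustrative, not PostHog's actual code:

```python
# Sketch of the removed opt-in gate. `needs_timeseries` is a hypothetical
# helper; the real code filters a Django queryset, not a list of dicts.

def needs_timeseries(experiment: dict, require_opt_in: bool = False) -> bool:
    """Return True if the experiment should get timeseries computed."""
    if experiment.get("status") != "running":
        return False
    if require_opt_in:
        # Old behavior: only experiments explicitly opted in via
        # scheduling_config.timeseries=True were picked up.
        return experiment.get("scheduling_config", {}).get("timeseries") is True
    # New behavior: every running experiment qualifies.
    return True


experiments = [
    {"id": 1, "status": "running", "scheduling_config": {"timeseries": True}},
    {"id": 2, "status": "running", "scheduling_config": {}},  # e.g. created via REST API
    {"id": 3, "status": "draft", "scheduling_config": {}},
]

old = [e["id"] for e in experiments if needs_timeseries(e, require_opt_in=True)]
new = [e["id"] for e in experiments if needs_timeseries(e)]
print(old)  # [1]
print(new)  # [1, 2]
```

Experiment 2 stands in for the creation paths that never set the flag: under the old gate it was silently skipped, after this PR it is picked up.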
🎭 Playwright didn't run on this PR. Your changes touch code that could affect E2E behavior, but Playwright is now opt-in via label to keep CI cost down. Add the label to run it; most PRs don't need this. Real regressions still get caught on master and fixed forward.
Size Change: 0 B · Total Size: 148 MB
This is a behavioral change that removes the scheduling_config.timeseries gate — all running experiments will now have timeseries computed, not just those explicitly opted in. The code changes are internally consistent and clearly intentional, but there are zero reviews and no confirmation that the team has assessed the potential load impact of suddenly processing all running experiments (including older ones without the flag). Request a review from someone on the experiments team before merging.
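The reviewer's load concern can be framed roughly: the extra work per recalculation cycle is proportional to the running experiments that never had the flag set. A toy estimate, with placeholder numbers rather than real PostHog figures:

```python
# Toy load estimate for removing the opt-in gate. Both counts are
# placeholders; plug in real counts from the production database.
opted_in = 120        # running experiments with scheduling_config.timeseries=True
running_total = 900   # all running experiments with metrics

newly_processed = running_total - opted_in
growth_factor = running_total / opted_in

print(f"{newly_processed} additional experiments per recalculation cycle")
print(f"~{growth_factor:.1f}x more timeseries computations")
```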

Problem
Timeseries computation is silently skipped on running experiments created through any path that doesn't set scheduling_config.timeseries=true.

The timeseries chart on metric confidence intervals is gated on experiment.scheduling_config.timeseries === true. The feature has been GA for months, but only two of the many experiment-creation paths actually set the flag: the classic form and the wizard. The Feature Flag → Experiments tab "Create draft" card, the Max AI tool, MCP, the REST API, and web_experiments all skip it. Result: a steady stream of running experiments where the timeseries chart never appears and the temporal recalculation workflow ignores them.

Changes
- Dropped the scheduling_config__timeseries=True filter from both temporal activity filters that discover experiments needing recalculation (regular metrics + saved metrics).
- Removed the gate in MetricRowGroup/MetricRowGroupTooltip: the "Click to view timeseries" hint and the click handler always render now.
- Removed scheduling_config: { timeseries: true } from the frontend write-sides, along with the now-unused property on the Experiment type.
- The scheduling_config JSONField on the model and serializer stays in place: it's harmless, and external callers still sending it won't break.

How did you test this code?
Agent-authored; no manual testing. Automated checks:

- posthog/temporal/experiments/test_recalculation_time_filter.py: 3/3 pass with the gate removed and the field stripped from the test helper.
- products/experiments/backend/test/test_experiment_service.py -k "scheduling_config or all_fields or default": 7/7 pass.
- pnpm typescript:check: no errors in any touched file (a full-repo run surfaces only pre-existing errors in llm_analytics, logs, tasks, visual_review, CupedModal).
- ruff check on the touched Python files: clean.

Publish to changelog?
no
🤖 Agent context
Driven by a question about a specific running production experiment (id 369732, team 2) that wasn't getting timeseries computed. Investigation surfaced the inconsistency: 6+ creation paths bypass the flag entirely, transitively poisoning duplicate/copy paths too.
Considered alternatives:

- Defaulting scheduling_config={"timeseries": True} in ExperimentService.create_experiment: preserves opt-out capability, but no opt-out UI exists and nothing uses the field. Not worth keeping.