test(app): add PR0.3 perf diagnostics #609
Conversation
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough
This PR adds perf baseline comment rendering and CLI output, wires comment creation/update into the perf comparator workflow (with conditional trace captures and job failure on regressions), makes Playwright trace mode env-configurable, and adds a terminal-side-panel baseline perf test.
Changes: Perf Regression Detection with PR Comments and Traces
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related issues
Possibly related PRs
🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Suggested priority: P2 (includes user-path files: packages/app/src/testing/perf-metrics.test.ts, packages/app/src/testing/perf-metrics.ts).
P1/P0 are reserved for maintainer confirmation. Please relabel manually if this is a release blocker, security issue, data-loss risk, or updater/runtime failure.
🧹 Nitpick comments (1)
.github/workflows/perf-probe-baseline.yml (1)
101-140: 💤 Low value: Consider extracting the marker constant to reduce duplication.
The marker string "<!-- pawwork-perf-probe-baseline -->" is hardcoded here and also defined as PERF_COMMENT_MARKER in packages/app/src/testing/perf-metrics.ts. While GitHub Actions can't directly import TypeScript constants, this duplication creates a maintenance risk if the marker changes. Consider documenting the coupling in a comment, or extracting the marker to a shared location (e.g., a shell variable at the workflow level that could be sourced from package.json or a config file).
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In @.github/workflows/perf-probe-baseline.yml around lines 101 - 140, The hardcoded marker "<!-- pawwork-perf-probe-baseline -->" in the GitHub Action (variable name marker in the script) duplicates PERF_COMMENT_MARKER from packages/app/src/testing/perf-metrics.ts; to fix, expose the marker via a single source consumed by the workflow (e.g., define PERF_COMMENT_MARKER as a workflow-level env variable or read it from package.json/config and set env: PERF_COMMENT_MARKER) and replace the inline string with process.env.PERF_COMMENT_MARKER (or document the coupling in a clear comment above the script if extraction is not feasible), ensuring you update references to the script's marker variable accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Nitpick comments:
In @.github/workflows/perf-probe-baseline.yml:
- Around line 101-140: The hardcoded marker "<!-- pawwork-perf-probe-baseline
-->" in the GitHub Action (variable name marker in the script) duplicates
PERF_COMMENT_MARKER from packages/app/src/testing/perf-metrics.ts; to fix,
expose the marker via a single source consumed by the workflow (e.g., define
PERF_COMMENT_MARKER as a workflow-level env variable or read it from
package.json/config and set env: PERF_COMMENT_MARKER) and replace the inline
string with process.env.PERF_COMMENT_MARKER (or document the coupling in a clear
comment above the script if extraction is not feasible), ensuring you update
references to the script's marker variable accordingly.
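The env-variable approach suggested above can be sketched as follows. This is a minimal, hypothetical sketch of the workflow-side script, not the actual perf-probe-baseline.yml step: it assumes the workflow exposes the marker through a job-level `env:` entry named PERF_COMMENT_MARKER (that name is an assumption), and falls back to the hardcoded value otherwise.

```typescript
// Hypothetical sketch: read the perf comment marker from the environment so
// the workflow and packages/app/src/testing/perf-metrics.ts can share one
// source of truth. The env var name PERF_COMMENT_MARKER is an assumption.
const FALLBACK_MARKER = "<!-- pawwork-perf-probe-baseline -->";

function getPerfCommentMarker(env: Record<string, string | undefined>): string {
  const marker = env.PERF_COMMENT_MARKER;
  // Fall back to the hardcoded value when the env var is missing or blank,
  // so the step still works if run outside the workflow.
  return marker && marker.trim() !== "" ? marker : FALLBACK_MARKER;
}
```

Inside the actions script, the inline string would then become `getPerfCommentMarker(process.env)` instead of a literal.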
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 7c0fd135-798f-4be6-90a0-e369058df44b
📒 Files selected for processing (6)
.github/workflows/perf-probe-baseline.yml
packages/app/e2e/perf/perf-probe.spec.ts
packages/app/playwright.config.ts
packages/app/script/compare-perf.ts
packages/app/src/testing/perf-metrics.test.ts
packages/app/src/testing/perf-metrics.ts
Code Review
This pull request introduces a new performance probe scenario for terminal side panel interactions and adds functionality to generate a markdown-formatted performance delta summary comment. The changes include updates to the Playwright configuration to enable tracing, a new test case in the performance probe suite, and helper functions for rendering performance comparison reports. I have reviewed the code and identified a potential issue regarding the terminal state reset in the new test case, as well as a readability improvement for the markdown table generation logic.
Perf delta summary
Comparator: pass
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.github/workflows/perf-probe-baseline.yml:
- Around line 7-8: The workflow's on.pull_request.paths is missing
packages/app/playwright.config.ts so changes to that config can bypass the
perf-probe-baseline job; update the paths list used by the workflow (the
on.pull_request.paths block) to include "packages/app/playwright.config.ts"
alongside the existing "packages/app/src/**" and "packages/ui/src/**" entries so
config-only PRs trigger this workflow.
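A minimal sketch of the suggested fix, assuming the trigger block looks like the paths named in the comment above (only the playwright.config.ts entry is new; the surrounding structure is an assumption about the workflow file):

```yaml
# Hypothetical excerpt of .github/workflows/perf-probe-baseline.yml:
# include the Playwright config so config-only PRs still trigger the job.
on:
  pull_request:
    paths:
      - "packages/app/src/**"
      - "packages/ui/src/**"
      - "packages/app/playwright.config.ts"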
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 580d62fa-3bd0-4f90-be67-02e65c5600e2
📒 Files selected for processing (1)
.github/workflows/perf-probe-baseline.yml
Summary
Add PR0.3 perf diagnostics without changing Area A behavior: a terminal side-panel perf scenario, failure-only Playwright trace reruns for comparator failures, and an upserted PR perf delta comment.
Why
PR0.2 can block regressions, but when the perf gate fails it still takes too much manual artifact digging to explain why. PR0.3 is the diagnostics layer: make failures replayable, make terminal/panel cost visible, and surface perf deltas directly on the PR.
Related Issue
Part of #600. Task anchor: task #78.
Human Review Status
Pending. A human should make the final merge decision after reviewing the final diff and verification evidence.
Review Focus
packages/app/e2e/perf/perf-probe.spec.ts: the new terminal-side-panel-open scenario measures the real terminal open path without expanding scope into terminal history interactions.
.github/workflows/perf-probe-baseline.yml and packages/app/playwright.config.ts: trace is only enabled for diagnostic reruns after comparator failure, so normal measurement numbers stay uncontaminated.
packages/app/script/compare-perf.ts and packages/app/src/testing/perf-metrics.ts: perf comment markdown is stable, marker-based, and upserts one PR comment instead of spamming new comments on every push.
Risk Notes
Low. This PR only changes perf harness, CI diagnostics, and PR reporting. It does not change product behavior, thresholds, or Area A render architecture. The main risk is workflow complexity; normal measurement runs still avoid trace overhead by design.
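The marker-based single-comment upsert described above can be sketched as a pure decision helper. This is an illustrative sketch only: `planUpsert`, `findPerfComment`, and `PrComment` are hypothetical names, not the actual compare-perf.ts API. Given the PR's existing comments, it decides whether to update the marked comment or create a new one, which is what keeps the bot from posting a fresh comment on every push.

```typescript
// Hypothetical sketch of the marker-based upsert (names are illustrative).
interface PrComment {
  id: number;
  body?: string;
}

const MARKER = "<!-- pawwork-perf-probe-baseline -->";

// Find the existing perf comment by its hidden HTML marker, if any.
function findPerfComment(comments: PrComment[], marker: string = MARKER): PrComment | null {
  return comments.find((c) => typeof c.body === "string" && c.body.includes(marker)) ?? null;
}

// Decide between updating the marked comment and creating a new one.
function planUpsert(
  comments: PrComment[],
  newBody: string,
  marker: string = MARKER,
): { action: "update"; commentId: number; body: string } | { action: "create"; body: string } {
  // Ensure the marker survives into the new body so the next run can find it.
  const body = newBody.includes(marker) ? newBody : `${marker}\n${newBody}`;
  const existing = findPerfComment(comments, marker);
  return existing
    ? { action: "update", commentId: existing.id, body }
    : { action: "create", body };
}
```

The caller would then map "update" to an edit-comment API call and "create" to a new-comment call; keeping the decision pure makes it easy to unit-test alongside perf-metrics.ts.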
How To Verify
Screenshots or Recordings
None. No visible UI changes.
Checklist
dev, and my PR title and commit messages use Conventional Commits in English
Summary by CodeRabbit
New Features
Chores
Tests