Build: Post bench reports as Semantic Performance Bot#147
Merged
Swap the comment job and the history-archive job from `GITHUB_TOKEN` to
an installation token minted via `actions/create-github-app-token@v1`,
driven by the two secrets already set up in repo settings:
- SEMANTIC_PERF_BOT_APP_ID
- SEMANTIC_PERF_BOT_PRIVATE_KEY
Same permission surface as before — Pull requests (write), Contents
(write), Actions (read). Branded identity, not widened scope.
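A minimal sketch of what the swap looks like in one of the two jobs (job name and later steps are illustrative; the minted token's effective permissions come from the App's installation, not from a workflow `permissions:` block):

```yaml
jobs:
  bench-comment:                  # illustrative job name
    runs-on: ubuntu-latest
    steps:
      - id: app-token
        uses: actions/create-github-app-token@v1
        with:
          app-id: ${{ secrets.SEMANTIC_PERF_BOT_APP_ID }}
          private-key: ${{ secrets.SEMANTIC_PERF_BOT_PRIVATE_KEY }}
      # Later steps consume the minted token where GITHUB_TOKEN was used before:
      - name: Post bench comment
        env:
          GH_TOKEN: ${{ steps.app-token.outputs.token }}
        run: node tools/bench-reporter/reporter.js   # illustrative invocation
```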
Effect downstream:
- PR bench comments post as the bot with its uploaded SUI-themed avatar
instead of the generic `github-actions[bot]` face.
- Archival commits on main are authored by the bot, so git blame /
history listings clearly show which chunk of main was machine-
authored bench bookkeeping vs. developer work.
Commit-author fields on the archive step use the
`<app-id>+<app-slug>[bot]@users.noreply.github.com` format. The slug is
assumed to be `semantic-performance-bot`; if the app was registered
under a different slug, that's a one-line fix in this file. GitHub
still attributes the commit to the app regardless; the slug only
affects the display email and avatar linkage.
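The archive job's author setup might look like this sketch (slug assumed to be `semantic-performance-bot`; `<app-id>` left as a placeholder per the description; step name and commit message are illustrative):

```yaml
      - name: Archive bench history
        run: |
          # Branded author identity; slug is assumed, <app-id> is a placeholder.
          git config user.name "semantic-performance-bot[bot]"
          git config user.email "<app-id>+semantic-performance-bot[bot]@users.noreply.github.com"
          git add .
          git commit -m "chore(bench): archive report"
          git push
```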
Also bundles `docs/public/images/{heap,performance}-avatar.png` staged
alongside — the avatar assets land with the workflow change that uses
them.
Acceptance test: next PR that touches `packages/**` after this merges
posts its bench comment under the bot's identity. One-commit revert
restores `GITHUB_TOKEN` if anything's off.
…l out

Two corrections to the `pull_request` path filter now that we've exercised the workflow enough to see the gaps:

- Add `tools/bench-reporter/**`. A PR that modifies only reporter.js or append-history.js previously didn't trigger benchmarks, yet those scripts run from the PR-head checkout when the report workflow fires, so the change does take effect on the PR's own comment. Missed coverage.
- Drop `.github/workflows/benchmarks-report.yml`. This workflow file is the `workflow_run` handler; GitHub runs it from main's copy, not the PR's, which means a PR that modifies only this file cannot validate its own change inline anyway. Triggering benchmarks on such a PR wastes ~10 min of CI without producing actionable signal.

`benchmarks.yml` itself stays in the filter: it's the `pull_request` entry point and GitHub runs it from PR-head YAML, so self-validation works.
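The corrected filter would read roughly like the sketch below (surrounding entries are assumptions; only the two changed lines are grounded in the comment above):

```yaml
on:
  pull_request:
    paths:
      - 'packages/**'
      - 'tools/bench-reporter/**'           # added: scripts run from the PR-head checkout
      - '.github/workflows/benchmarks.yml'  # stays: pull_request entry point, runs from PR-head YAML
      # '.github/workflows/benchmarks-report.yml' dropped: workflow_run handler runs from main's copy
```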
Threads an installation token from the newly-registered Semantic Performance Bot GitHub App through both jobs in `benchmarks-report.yml`. Same permission surface as `GITHUB_TOKEN`; branded identity, not widened scope.
What changes
Both jobs in `benchmarks-report.yml` mint an installation token via `actions/create-github-app-token@v1` and use it in place of `GITHUB_TOKEN`. Permissions stay at Pull requests (write), Contents (write), Actions (read).
Secrets consumed
Two, both already configured in repo settings:
- `SEMANTIC_PERF_BOT_APP_ID`
- `SEMANTIC_PERF_BOT_PRIVATE_KEY`
One thing to verify
The archival commit's author email uses the slug `semantic-performance-bot` in the `<app-id>+<app-slug>[bot]@users.noreply.github.com` format. If the app was registered under a different slug, that's a one-line fix in `benchmarks-report.yml`. GitHub still attributes the commit to the app either way; the slug only affects the rendered author email and avatar linkage in the commit view.
Acceptance test
After this merges, the next PR that touches `packages/**` posts its bench comment under the bot's identity and avatar. One-commit revert restores `GITHUB_TOKEN` if anything's off.
Bundled
`docs/public/images/{heap,performance}-avatar.png` — the avatar assets that land with the workflow that uses them.