Do we still need to run the "Benchmark Tests" check on PR approval? It triggers an hour-long CI job, which adds to CI fatigue for maintainers who may not be sure whether the check is important.
It also looks like maintainers now ignore this check when merging PRs (since it fails consistently), which raises the question of whether it is really useful.
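For reference, "run on PR approval" is typically wired up with a `pull_request_review` trigger in the workflow file. A hedged sketch (not the actual jupyterlab workflow) of how the hour-long job could instead be made opt-in:

```yaml
# Hypothetical sketch, not the actual jupyterlab workflow file:
name: Benchmark Tests
on:
  # current behavior: run whenever a review is submitted (i.e. on approval)
  pull_request_review:
    types: [submitted]
  # possible alternative: let maintainers trigger the run manually instead
  workflow_dispatch:
```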
We could just use GitHub Actions instead of cml, as in jupyterlab/benchmarks#144. Comments support markdown, so if we generate a PNG we can base64-encode it.
Or maybe we could just post the result as a comment using gh?
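A minimal sketch of the "post the result as a comment" idea, assuming the benchmark job produces a chart image; `benchmark.png`, `comment.md`, and `PR_NUMBER` are placeholder names, not anything the current workflow defines:

```shell
# Hedged sketch: inline a benchmark chart into a markdown comment body,
# then post it with the gh CLI.
set -eu

# (demo only) stand-in for the chart the real benchmark job would produce:
printf 'placeholder-image-bytes' > benchmark.png

# Base64-encode the image (-w0 is GNU coreutils; fallback covers BSD/macOS):
b64=$(base64 -w0 benchmark.png 2>/dev/null || base64 benchmark.png | tr -d '\n')

{
  echo '## Benchmark results'
  echo
  echo "![benchmark chart](data:image/png;base64,${b64})"
} > comment.md

# In the workflow, authenticated via GITHUB_TOKEN, the comment could then
# be posted with (hypothetical PR number):
#   gh pr comment "$PR_NUMBER" --body-file comment.md
```

Whether GitHub renders inline `data:` image URIs in issue comments would need to be verified; if not, uploading the PNG as an artifact and linking it is the fallback.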
Description
The "Benchmark Tests" CI check fails consistently when PRs are approved.
Reproduce
Example run: https://github.com/jupyterlab/jupyterlab/actions/runs/6043794173/job/16401409899?pr=15042
Expected behavior
The benchmark check should pass most of the time.
Context
Do we still need to run the "Benchmark Tests" check on PR approval? It triggers an hour-long CI job, which adds to CI fatigue for maintainers who may not be sure whether the check is important.
It also looks like maintainers now ignore this check when merging PRs (since it fails consistently), which raises the question of whether it is really useful.