
Manually trigger benchmark tests instead of running them on pull_request_review #15523

Merged 2 commits into jupyterlab:main on Dec 19, 2023

Conversation

jtpio (Member) commented Dec 12, 2023

References

Hoping to fix #15089, and to progressively get CI to pass again consistently.

The benchmark checks have been failing on CI for many months.

These jobs are run on PR approval and may take a very long time to complete, which means that in theory a maintainer would have to wait again for a new job to complete (and succeed) before hitting the merge button. Overall this adds a lot of maintenance fatigue and makes it difficult to merge PRs, to the point that it seems it's just been ignored for the past months?

Code changes

Run the benchmark on schedule instead of pull_request_review to get these runs out of the review workflow.
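For illustration, a minimal sketch of what moving the trigger could look like in the GitHub Actions workflow file. The cron expression and the exact combination of events are assumptions for the sketch, not the literal diff from this PR:

```yaml
# Before: benchmarks ran whenever a review was submitted
# on:
#   pull_request_review:
#     types: [submitted]

# After (sketch): run on a schedule and/or on demand via the Actions UI
on:
  schedule:
    - cron: "0 3 * * *"  # illustrative: once a day at 03:00 UTC
  workflow_dispatch:
```

With `workflow_dispatch`, a maintainer can start the benchmark run manually from the Actions tab whenever it is actually needed.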

User-facing changes

None

Backwards-incompatible changes

None


jtpio added this to the 4.1.0 milestone on Dec 12, 2023
krassowski (Member)

> To the point that it seems that it's just been ignored for the past months?

More like it was only looked at manually, for changes which could introduce memory leaks.

Though this does not fix the underlying issue.

jtpio (Member, Author) commented Dec 12, 2023

Should we trigger these benchmarks when the special "please run benchmarks" sentence is used then? (like for updating Galata snapshots).

This would already help avoid running these workflows by default, and only when they are needed.
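For context, a comment-triggered run could look roughly like the sketch below; the trigger phrase and job layout are assumptions, not the actual jupyterlab workflow:

```yaml
# Sketch: react to PR comments containing a trigger phrase
on:
  issue_comment:
    types: [created]

jobs:
  benchmark:
    # issue_comment also fires on plain issues, so filter to PRs
    # and require the magic phrase in the comment body
    if: >
      github.event.issue.pull_request &&
      contains(github.event.comment.body, 'please run benchmarks')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... benchmark steps would follow here
```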

krassowski (Member) commented Dec 12, 2023

Maybe depending on label would be better? Currently benchmarks cover mostly the pkg:notebook, maybe a few select others?

Edit: actually both depending on label + option to run manually with "please run benchmarks"
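A label-gated variant, as suggested above, might look like this sketch (the label name `run-benchmarks` is illustrative, not from this PR):

```yaml
# Sketch: run only when a specific label is added to the PR
on:
  pull_request:
    types: [labeled]

jobs:
  benchmark:
    if: github.event.label.name == 'run-benchmarks'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... benchmark steps would follow here
```

Both mechanisms could coexist in one workflow by listing several events under `on:`.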

jtpio changed the title from "Run benchmark tests on schedule instead of pull_request_review" to "Manually trigger benchmark tests instead of running them on pull_request_review" on Dec 12, 2023
jtpio (Member, Author) commented Dec 13, 2023

> Maybe depending on label would be better? Currently benchmarks cover mostly the pkg:notebook, maybe a few select others?

Yes, I guess that can make sense. Although since these benchmarks take a while to run, maybe a maintainer would be interested in running them only at certain points in time, not necessarily for each commit?

jtpio (Member, Author) commented Dec 19, 2023

Thanks @krassowski for the review!

Merging this PR as an incremental improvement towards #15089. We can look into fixing the benchmark workflow and adding more triggers as part of #15089.

jtpio merged commit 8c5aa3a into jupyterlab:main on Dec 19, 2023
77 of 78 checks passed
jtpio deleted the benchmark-runs branch on December 19, 2023 at 08:52
Successfully merging this pull request may close: Broken benchmark check on CI (#15089).