Description
Currently, `test_benchmarks.py` contains a single test that runs all criterion benchmarks. As the number of benchmarks grows, the duration of this test grows with it, and we need to keep adjusting its timeout. Instead, we want to parameterize the test by the list of criterion benchmarks in the repository. This means introducing a fixture that yields each benchmark listed by `cargo bench --all -- --list` individually. The test itself would then run only the benchmark it is given. This should also make it easier to see which benchmarks are failing.
Originally posted by @pb8o in #4830 (comment)
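The parameterization described above could be sketched roughly as follows. This is only an illustration, not the actual Firecracker test code: the helper names (`parse_benchmark_list`, `test_criterion_benchmark`), the placeholder listing, and the assumption that criterion's libtest-style `--list` output prints one `<name>: benchmark` line per benchmark are all mine.

```python
import subprocess  # used only in the commented-out real invocations


def parse_benchmark_list(listing: str) -> list:
    """Extract benchmark names from libtest-style `--list` output.

    Assumes each benchmark appears on its own line as `<name>: benchmark`
    (an assumption about criterion's listing format, not verified here).
    """
    names = []
    for line in listing.splitlines():
        line = line.strip()
        if line.endswith(": benchmark"):
            names.append(line.rsplit(":", 1)[0])
    return names


def pytest_generate_tests(metafunc):
    """Turn each listed benchmark into its own test case at collection time."""
    if "benchmark_name" in metafunc.fixturenames:
        # In the real suite this would shell out once, e.g.:
        #   out = subprocess.check_output(
        #       ["cargo", "bench", "--all", "--", "--list"], text=True)
        # Placeholder listing so the sketch is self-contained:
        out = "block_size/4096: benchmark\nqueue_pop: benchmark\n"
        metafunc.parametrize("benchmark_name", parse_benchmark_list(out))


def test_criterion_benchmark(benchmark_name):
    # Run only the one benchmark this test instance was given, e.g.:
    #   subprocess.check_call(["cargo", "bench", "--all", "--", benchmark_name])
    # (whether an exact-match flag is available depends on the criterion version)
    assert benchmark_name
```

With this shape, a failing benchmark shows up as a single failed test case named after the benchmark, and each case can get its own timeout instead of one shared budget.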
Activity
MacOS commented on Oct 18, 2024
If no one has taken it yet, I would like to take it over.
bchalios commented on Oct 30, 2024
Hey @MacOS, please start working on it if you want.
cm-iwata commented on Dec 21, 2024
@MacOS
CC: @bchalios
Are you still working on this?
If you are busy, may I handle it instead of you?
MacOS commented on Jan 3, 2025
I plan to submit something next week; however, I wouldn't mind if you did it.
gjkeller commented on Apr 10, 2025
@MacOS @cm-iwata @bchalios
I wanted to check and see if this issue was still being worked on. If not, would I be able to work on it?
cm-iwata commented on Apr 10, 2025
@gjkeller
I'm waiting for a pull request review...
gjkeller commented on Apr 11, 2025
@cm-iwata
No worries! I was unsure whether it was complete because of the failing tests, but it seems the failures were only due to timeouts and network errors. Good luck with getting your PR merged!
roypat commented on Apr 23, 2025
Hi, sorry, I thought the latest update on the PR was that you were looking into why the CI was failing. Did you need help with anything? :o
cm-iwata commented on Apr 27, 2025
@roypat
I have made some adjustments, so could you please re-run the CI and review it?