Most suites have a natural increase in complexity and compute time needed to finish; in Robot Coordination, for example, the computation grows exponentially as you increase the grid size.
Sorting the tests in a suite, cheapest first, is a good first step toward balancing where time is spent checking performance; there is no need to start at the most expensive tests. However, when comparing search strategies, we don't know when to switch from one strategy to the next.
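For illustration, a minimal sketch of that ordering; the `Test` record and its `estimatedCost` field are hypothetical, standing in for whatever cost proxy a suite exposes (e.g. grid size in Robot Coordination):

```java
import java.util.Comparator;
import java.util.List;

class SuiteOrdering {
    // Hypothetical test descriptor; estimatedCost is a stand-in for
    // whatever ordering key the real suite provides.
    record Test(String name, int estimatedCost) {}

    static void sortCheapestFirst(List<Test> tests) {
        tests.sort(Comparator.comparingInt(Test::estimatedCost));
    }
}
```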
Currently, each strategy runs to completion on a suite before progressing to the next strategy. Running each test for all strategies before advancing could be good; this is just switching the order of iteration. However, we would then need to load intermediate CSVs for the strategies when they resume benching. Indexing into a per-strategy array, `pds[STRATEGIES.length]`, and saving upon test completion is probably also fine.
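A rough sketch of the swapped iteration order, with the per-strategy `pds` array and a save on every test completion; `Strategy`, `Test`, `Result`, and `runBench` are hypothetical stand-ins for the real harness:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class InterleavedRunner {
    record Strategy(String name) {}
    record Test(String name) {}
    record Result(String strategy, String test, long millis) {}

    // Hypothetical bench entry point standing in for the real harness.
    static Result runBench(Strategy s, Test t) {
        long start = System.nanoTime();
        // ... run the search with strategy s on test t ...
        return new Result(s.name(), t.name(), (System.nanoTime() - start) / 1_000_000);
    }

    static void runSuite(Test[] tests, Strategy[] STRATEGIES) throws IOException {
        Result[][] pds = new Result[STRATEGIES.length][tests.length];
        for (int t = 0; t < tests.length; t++) {          // outer loop: tests
            for (int s = 0; s < STRATEGIES.length; s++) { // inner loop: strategies
                pds[s][t] = runBench(STRATEGIES[s], tests[t]);
                // Save upon test completion: append to that strategy's CSV,
                // so an interrupted run can resume without re-deriving state.
                Path csv = Path.of(STRATEGIES[s].name() + ".csv");
                Files.writeString(csv,
                        pds[s][t].test() + "," + pds[s][t].millis() + "\n",
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            }
        }
    }
}
```

Appending one row per completed test keeps resumption cheap: on restart, counting the rows already in each strategy's CSV tells us which test to continue from, with no mid-suite merging of intermediate files.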
We also have a timeout set per bench. Since we iterate over six different thread counts, this has an upper bound of 12 hours. While we have taken pains to avoid noise in results, allowing parallelism might be fine for longer-running benchmarks.
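Enforcing that per-bench budget could look roughly like the sketch below; the `TIMEOUT_MINUTES` value, the `Result` record, and the `Callable` bench are all assumptions, since the actual timeout mechanism isn't shown here:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class TimedBench {
    record Result(boolean timedOut, long millis) {}

    // Assumed budget; the real per-bench timeout value is not stated above.
    static final long TIMEOUT_MINUTES = 30;

    static Result runWithTimeout(ExecutorService pool, Callable<Long> bench)
            throws InterruptedException, ExecutionException {
        Future<Long> f = pool.submit(bench);
        try {
            return new Result(false, f.get(TIMEOUT_MINUTES, TimeUnit.MINUTES));
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt, so the next thread-count config can start
            return new Result(true, -1);
        }
    }
}
```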