As discussed offline, we think there are a few improvements we could make in how we do performance benchmarks for H3:
- use some standard benchmarking framework instead of rolling our own (which is what we currently do)
  - a framework could provide nice features like warm-ups, more advanced stats (min vs. avg), etc.; a rough sketch of what we currently have to hand-roll is below this list
  - what options currently exist for C?
- standardize the benchmark format to make it more machine-readable (like in Go), which would make it easier to automatically compare timings between diffs (see the reporter sketch below)
- avoid doing benchmarks on laptops, and/or find a dedicated benchmarking machine
- use random (but deterministic) inputs for improved input coverage (see the PRNG sketch below)
- track and plot benchmark performance over the timeline of diffs, like what is done in Python using `asv` for libraries like `numpy`
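To make the framework point concrete, here's a minimal sketch of the warm-up and min/avg bookkeeping we currently have to write ourselves; `benchmarkFn` is a placeholder for whatever H3 call is being measured, and the monotonic-clock choice is just one option. A framework would give us this, plus sturdier statistics, for free:

```c
#include <stdio.h>
#include <time.h>

// Placeholder for the function under test; a real harness would also
// need to keep the compiler from optimizing the call away.
static void benchmarkFn(void) { /* ... */ }

static double nowNs(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e9 + ts.tv_nsec;
}

int main(void) {
    const int warmups = 1000;
    const int iterations = 100000;

    // Warm-up pass: populate caches and branch predictors before timing.
    for (int i = 0; i < warmups; i++) benchmarkFn();

    double total = 0, minNs = 1e30;
    for (int i = 0; i < iterations; i++) {
        double start = nowNs();
        benchmarkFn();
        double elapsed = nowNs() - start;
        total += elapsed;
        if (elapsed < minNs) minNs = elapsed;
    }

    // Min is often more stable than avg under machine noise.
    printf("avg %.1f ns/op, min %.1f ns/op\n", total / iterations, minNs);
    return 0;
}
```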
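On the machine-readable format: Go's `testing` package prints one line per benchmark (roughly `BenchmarkName <iterations> <ns/op>`), which tools like `benchstat` can diff between two runs. A hypothetical reporter in that spirit (names and numbers are illustrative, not real measurements):

```c
#include <stdio.h>

// Emit one result line in the Go-style benchmark format so existing
// comparison tooling can parse it.
static void reportGoStyle(const char *name, long iterations, double nsPerOp) {
    printf("Benchmark%s\t%ld\t%.1f ns/op\n", name, iterations, nsPerOp);
}

int main(void) {
    reportGoStyle("GeoToH3", 2000000, 812.5);
    return 0;
}
```

Emitting this from the C harness would let us reuse existing diffing tools rather than writing our own comparison scripts.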
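And for deterministic random inputs, a tiny fixed-seed PRNG (e.g. xorshift64, sketched below) keeps the input sequence reproducible across machines, unlike `rand()`, whose sequence depends on the libc. The seed and coordinate ranges here are illustrative:

```c
#include <stdint.h>
#include <stdio.h>

// Marsaglia xorshift64 with a fixed, nonzero seed: random-looking but
// fully reproducible across platforms.
static uint64_t rngState = 0x2545F4914F6CDD1DULL;

static uint64_t nextRand(void) {
    rngState ^= rngState << 13;
    rngState ^= rngState >> 7;
    rngState ^= rngState << 17;
    return rngState;
}

// Uniform double in [0, 1), built from the top 53 bits.
static double nextDouble(void) {
    return (nextRand() >> 11) * (1.0 / 9007199254740992.0);
}

int main(void) {
    // Deterministic lat/lng pairs covering the globe; these could feed
    // the H3 calls inside the benchmark loop.
    for (int i = 0; i < 5; i++) {
        double lat = nextDouble() * 180.0 - 90.0;
        double lng = nextDouble() * 360.0 - 180.0;
        printf("input %d: lat=%.6f lng=%.6f\n", i, lat, lng);
    }
    return 0;
}
```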
For context, this issue came up after seeing a lot of variance in the benchmarks I was running on my laptop for #496.