Running the same test on the same dedicated VM, two days apart, gives the following runtime tables:
At the moment there is too much noise in the times to make any judgement from these tables. To improve this, we should:

- run the benchmarks a number of times (five by default, but make this a control parameter), and discard the highest and lowest¹
- consider using the `timeit` module instead of `time.clock`, as it is claimed to be more robust at benchmarking, especially across platforms; it also handles the repetition from the previous point naturally (see the docs).

¹ Some minimizers, e.g. the Differential Evolution (`de`) minimizer in bumps, are stochastic and will naturally give different runtimes. This should be noted somewhere in the table.
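The two points above can be combined in one small helper. A minimal sketch (the `benchmark` name and defaults are hypothetical, not part of the codebase) using `timeit.repeat`, which times with the platform's most precise clock, then trimming the extreme runs before averaging:

```python
import statistics
import timeit

def benchmark(stmt, setup="pass", repeat=5, number=1):
    """Time `stmt` `repeat` times (a control parameter, five by
    default), discard the fastest and slowest runs, and return the
    mean of the remaining times in seconds."""
    # timeit.repeat returns one total time per repetition and uses
    # time.perf_counter under the hood, rather than time.clock.
    times = timeit.repeat(stmt, setup=setup, repeat=repeat, number=number)
    trimmed = sorted(times)[1:-1]  # drop the highest and the lowest
    return statistics.mean(trimmed)

# Illustrative usage on a stand-in workload:
print(benchmark("sum(x * x for x in range(1000))"))
```

Note that for a stochastic minimizer such as `de`, the trimmed mean reduces scheduling noise but not the run-to-run variation inherent to the algorithm itself, so those rows would still need the caveat in the footnote.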
![180919](https://user-images.githubusercontent.com/13645545/65671050-73fc0080-e03e-11e9-8a71-c09150ce52f6.png)
![190919](https://user-images.githubusercontent.com/13645545/65671060-76f6f100-e03e-11e9-99f7-635af981579f.png)