Difference between `asv dev` and `asv run` (due to OpenMP) #671

Comments
I just saw that `asv dev` is the same as `asv run --quick`, which just sets `repeat = number = 1`, so the benchmark gets run once. You can try different values of repeat and number in IPython, e.g. with `%timeit -n3 -r4`.
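Outside IPython, the same knobs are exposed by the standard-library `timeit` module; a minimal sketch, where `-n3 -r4` maps to `number=3, repeat=4` (the summed statement is just a placeholder workload):

```python
import timeit

# Rough equivalent of `%timeit -n3 -r4 stmt`: 4 samples, 3 executions each.
times = timeit.repeat("sum(range(1000))", repeat=4, number=3)

# Each entry is the total time for `number` executions of the statement.
print(len(times))  # 4 samples
print(min(times) / 3)  # per-execution time of the fastest sample
```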
---
You can also check asv 0.3.dev, which also reports statistics. If you are running e.g. on a laptop, the average of multiple runs can easily be larger than a single run if the CPU decides to shift its frequency down under load, among other possibilities.
---
In addition, maybe you get a different linear algebra library (e.g. MKL vs OpenBLAS) in the new environment created by `asv run`.
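One way to check that possibility (assuming NumPy is installed in both environments) is to print the BLAS/LAPACK build configuration inside each environment and compare:

```python
import numpy as np

# Prints the BLAS/LAPACK libraries NumPy was built against (MKL, OpenBLAS, ...);
# running this inside each asv environment shows whether they differ.
np.__config__.show()
```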
---
Basically, as usual, in the absence of a reproducible example little
can be said.
---
Thanks @pv. Not running on a laptop, but on an Intel NUC, which might behave similarly indeed. I hope not. I think in both cases it uses […]
If `asv run --quick` gives the same result as `asv dev`, then it's not an environment issue. A remaining question then is whether you managed to reproduce it with the `-n` and `-r` options to `%timeit`?
---
Nope. This is the default output: […] So the times agree with […] There seems to be another issue: if I run […]
Is it possible to limit the number of threads for […]
`asv` doesn't know about threads or use them --- the only difference between `asv run --quick` and `asv run` is whether `timer = timeit.Timer(...); results = timer.repeat(repeat, number)` is run with `number = repeat = 1` or with other parameters.

For asv development, the more interesting question is what asv 0.3.dev from the git master branch says. (0.2.2 will only fix a few selected bugs; it doesn't change the benchmark methodology.)
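That difference can be sketched with plain `timeit` (a minimal illustration; `work` is a stand-in for a benchmarked function, not code from the report):

```python
import timeit

def work():
    # stand-in for the benchmarked function
    sum(i * i for i in range(10_000))

timer = timeit.Timer(work)

# asv dev / asv run --quick: a single sample of a single execution
quick = timer.repeat(repeat=1, number=1)

# asv run: several samples, each timing several executions
full = timer.repeat(repeat=5, number=10)

print(len(quick), len(full))  # 1 5
```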
---
I'll give it a shot with […]
For asv 0.2.x, you can also try setting e.g. `number = 1` and
`repeat = 5` as benchmark attributes, to disable their automatic
selection --- if that helps, there is/was a problem with the automatic
selection.
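As a sketch, such attributes go directly on the benchmark (the class and method names here are illustrative, not taken from the original report):

```python
class TimeSuite:
    # Fix the sampling parameters instead of letting asv choose them.
    number = 1  # executions per sample
    repeat = 5  # samples collected

    def time_example(self):
        # illustrative workload
        sum(i * i for i in range(10_000))
```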
---
If I set […]

...and the CPU usage does not explode... […]

With […]

Here again […]

---
Finally, to check again for the environment effects, you can do […]

If you don't get the strange bigger timings with asv 0.2.x for any combination of […]
If I put […]

I ran the whole benchmark of my project now (https://github.com/empymod/asv), paying close attention to the CPU. Almost the entire benchmark runs on one thread (25% of CPU usage), as you say, […]
Intel MKL is multithreaded, so that's one possibility, and perhaps it tries to enable threads in a "smart" way that's problematic here, so you can try with […]
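One way to test that theory (an assumption, since the exact variable that matters depends on whether NumPy was built against MKL or OpenBLAS) is to pin the thread count before NumPy is first imported:

```python
import os

# Must happen before NumPy (and thus MKL/OpenBLAS) is first imported,
# because the threading pools are sized at initialization.
os.environ["OMP_NUM_THREADS"] = "1"  # OpenMP-based builds, e.g. MKL
os.environ["MKL_NUM_THREADS"] = "1"  # MKL-specific override

print(os.environ["OMP_NUM_THREADS"])  # 1
```

In bash, the equivalent is `export OMP_NUM_THREADS=1` before invoking `asv run`.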
If I set […]
There we go. Setting it in bash […] Not sure if this is something that should be mentioned in the docs or set by […]
There could be a FAQ on setting up the benchmarking environment, as it does have this sort of caveat.
That could help. I guess the […]
I ran into a weird issue which I cannot track down (using asv v0.2.1), so I thought I'd ask here if it is a known or common problem.

If I run a simple test using `asv dev` with a parameter that can take three different values, I get the following output: […]

These times agree with the times I get if I run the tests simply with `%timeit` in an IPython console.

Now if I run `asv run` instead of `asv dev`, the result stored in `b804477a-conda-py3.6.json` is: […]

The times vastly differ between the `asv dev` and the `asv run` results. Any idea why that could be?