Feature request: number=1 or max_rounds #9
Comments
Quick question: how long does this function usually take to run?
It purely depends on how many objects are set up for the test; > 1 second at, say, 1000 objects.
TODO note for me: this requires fixing the result storage in the inner loop (the …)
No, I don't think so; the case here does not have separable parts to measure. The use case is to provide an indicative value and put an alert in place should the test suddenly exceed a nominal time. Basically I just need to be able to run the test once, without calibration and averaging, but take advantage of the pytest report integration.
@gavingc I'm thinking about something like this:

```python
@pytest.mark.benchmark(max_rounds=1, calibration=False)
def test_a_thing_that_shouldnt_use_pytest_bechmark(benchmark):
    benchmark(...)
```

At least …
Looks the business 👍
After some consideration I'm implementing an API that allows this:

```python
def test_that_is_special(benchmark):
    benchmark.manual(func, args=(...), kwargs={...}, setup=some_setup_func, iterations=1, rounds=1)
```

This should cover what you asked for.
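As a rough sketch of how the original use case might look with this (`sync_objects` stands in for the reporter's own function from the opening post, and the call mirrors the proposal above rather than a released API):

```python
def sync_objects():
    """Placeholder for the reporter's database-inserting unit under test."""
    ...

def test_sync_objects_runs_once(benchmark):
    # One round, one iteration: the target runs exactly once, with no
    # calibration or averaging, but the timing still lands in the report.
    benchmark.manual(sync_objects, args=(), kwargs={}, iterations=1, rounds=1)
```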
Intense, and looks wickedly flexible :-)
Well, give it a try. Last chance to change this before I release 3.0.0.
Ok, trying. My head is a million miles away in another project, so I might be missing something simple. I tried to install from GitHub with pip into my project's virtualenv and got this:

```
$ env/bin/pip install git+https://github.com/ionelmc/pytest-benchmark.git
…
Cleaning up...
```
Ok, so after the above change pytest-benchmark appears to install. My test: …
It appears that the side effect of running the test twice still occurs, yes? Eyes falling out = bedtime.
@gavingc the install issue is caused by a too-old setuptools. Can you upgrade your setuptools? The example you pasted should only call sync_objects once; there is a test for exactly that: https://github.com/ionelmc/pytest-benchmark/blob/master/tests/test_manual.py#L4-L7 Can you show me how IntelliJ complains about the args (a screenshot or something would be nice)?
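For reference, upgrading setuptools inside the virtualenv should just be a pip call along these lines:

```
$ env/bin/pip install --upgrade setuptools
```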
OK! Essentially IntelliJ IDEA will warn about any unused parameters/vars, since that is usually a mistake on the programmer's part.
Given:

```python
def test_bench(self, benchmark):
    from time import sleep

    def slp():
        sleep(3)

    benchmark.manual(slp, iterations=1, rounds=1)
```

```
$ ./tests.py -k test_bench
collected 142 items

tests.py::TestServerUnit::test_bench PASSED

------- benchmark: 1 tests, min 5 rounds (of min 25.00us), 1.00s max time, timer: time.time --------
Name (time in s)    Min     Max     Mean    StdDev  Median   IQR     Outliers(*)  Rounds  Iterations
-----------------------------------------------------------------------------------------------------
test_bench          3.0031  3.0031  3.0031  0.0000  3.0031   0.0000  0;0          1       1
-----------------------------------------------------------------------------------------------------
(*) Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.

=========================== 141 tests deselected by '-k test_bench' ===========================
============================ 1 passed, 141 deselected in 3.08 seconds =========================
```

The only thing I can see that looks a little bit off is the "min 5 rounds".
Good catch. Maybe I should just remove the timing data from the header (it doesn't really relate to the tests run in that group; those are just default settings).
FYI: there's gonna be a rename: benchmark.manual → benchmark.pedantic.
Here be some docs: http://pytest-benchmark.readthedocs.org/en/latest/pedantic.html
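Judging from that page, the renamed call looks roughly like the sketch below (`sync_objects` is again just a placeholder target; see the linked docs for the authoritative signature):

```python
def sync_objects():
    """Placeholder target; substitute the real unit under test."""
    ...

def test_sync_objects_pedantic(benchmark):
    # rounds=1 and iterations=1 mean the target runs exactly once;
    # pytest-benchmark still records the single timing in its report.
    benchmark.pedantic(sync_objects, args=(), kwargs={}, iterations=1, rounds=1)
```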
I really like it; fantastic work on the docs. For consideration: perhaps the unsafe options shouldn't have defaults or be optional. I think the implementation and the options available will prompt me to improve the test too.
The typical use case that I had in mind was benchmarking slow, side-effectful functions. For that situation … On the other hand, I don't want to require users to type so much and spell everything out :) If you do use the defaults and target a fast function, then your results are going to look obviously wrong.
I realise how much effort has gone into getting a reasonable average benchmark.
However, I have just run into a use case where the unit under test must run exactly once.
It's not so much a benchmark as an indicative measurement.
The unit is inserting objects into a database (within a complex sequence), so runs after the first are not representative.
A bit of an edge case, I know.
For now I'm using:

```python
import timeit

t = timeit.timeit(sync_objects, number=1)
assert t < 1
```