
Commit

Clarify some of the examples.
ionelmc committed Oct 23, 2015
1 parent 3731cd7 commit 8ec8c9d
Showing 2 changed files with 87 additions and 22 deletions.
55 changes: 47 additions & 8 deletions README.rst
@@ -88,37 +88,75 @@ Installation
Documentation
=============

Available at: `pytest-benchmark.readthedocs.org <http://pytest-benchmark.readthedocs.org/en/stable/>`_.

Examples
========

This plugin provides a `benchmark` fixture. This fixture is a callable object that will benchmark any function passed
to it.

Example:

.. code-block:: python

    import time

    def something(duration=0.000001):
        """
        Function that needs some serious benchmarking.
        """
        time.sleep(duration)
        # You may return anything you want, like the result of a computation
        return 123

    def test_my_stuff(benchmark):
        # benchmark something
        result = benchmark(something)

        # Sometimes you may want to check the result; fast functions
        # are no good if they return incorrect results :-)
        assert result == 123

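Under the hood the fixture calls the function for you and hands back its return value, which is why the assert above works. A minimal stand-in (a hypothetical `fake_benchmark`, with none of the real timing machinery) makes the calling convention concrete:

```python
import time

def something(duration=0.000001):
    time.sleep(duration)
    return 123

# Hypothetical stand-in: the real fixture runs the callable repeatedly and
# times it, but like this stand-in it returns the function's return value.
def fake_benchmark(fn, *args, **kwargs):
    return fn(*args, **kwargs)

result = fake_benchmark(something)
assert result == 123
```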
You can also pass extra arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.02)

Or even keyword arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, duration=0.02)

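When the callable plus its arguments get unwieldy, one plain-Python alternative is to pre-bind them with `functools.partial` and benchmark the resulting no-argument callable. This is a general Python pattern, not something the fixture requires:

```python
import functools
import time

# Pre-bind the argument so the resulting callable takes no arguments;
# in a test you would then write benchmark(sleep_briefly).
sleep_briefly = functools.partial(time.sleep, 0.0001)

sleep_briefly()  # behaves exactly like time.sleep(0.0001)
```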
Another pattern seen in the wild, which is not recommended for micro-benchmarks (very fast code) but may be convenient:

.. code-block:: python

    def test_my_stuff(benchmark):
        @benchmark
        def something():  # unnecessary function call
            time.sleep(0.000001)

A better way is to just benchmark the final function:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.000001)  # way more accurate results!

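The accuracy warning above comes down to call overhead: the wrapper adds one extra Python-level call to every measurement. A rough illustration with the standard library's `timeit` (not the plugin's own timer; absolute numbers vary by machine):

```python
import timeit

def noop():
    pass

def wrapped():
    noop()  # one extra Python-level call per measured unit

direct = timeit.timeit(noop, number=100_000)
indirect = timeit.timeit(wrapped, number=100_000)
# For bodies this fast, the extra call is a large fraction of the total.
print(direct, indirect)
```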
If you need fine control over how the benchmark is run (like a `setup` function, or exact control of `iterations` and
`rounds`), there's a special mode - pedantic_:

.. code-block:: python

    def my_special_setup():
        ...

    def test_with_setup(benchmark):
        benchmark.pedantic(something, setup=my_special_setup,
                           args=(1, 2, 3), kwargs={'foo': 'bar'},
                           iterations=10, rounds=100)

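As a toy model of what those knobs mean (a hypothetical simplification, not pytest-benchmark's implementation): `setup` runs once per round outside the measurement, the function runs `iterations` times per round, and each round contributes one timing sample:

```python
import time

def pedantic_sketch(fn, setup, iterations, rounds):
    samples = []
    for _ in range(rounds):
        setup()  # runs once per round; not included in the timing
        t0 = time.perf_counter()
        for _ in range(iterations):
            fn()
        # one sample per round, averaged over the iterations
        samples.append((time.perf_counter() - t0) / iterations)
    return samples

samples = pedantic_sketch(lambda: None, setup=lambda: None,
                          iterations=10, rounds=100)
assert len(samples) == 100
```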
Screenshots
-----------
@@ -157,6 +195,7 @@ Credits

.. _FAQ: http://pytest-benchmark.readthedocs.org/en/latest/faq.html
.. _calibration: http://pytest-benchmark.readthedocs.org/en/latest/features.html#calibration
.. _pedantic: http://pytest-benchmark.readthedocs.org/en/latest/pedantic.html



54 changes: 40 additions & 14 deletions docs/usage.rst
@@ -2,46 +2,70 @@
Usage
=====

This plugin provides a `benchmark` fixture. This fixture is a callable object that will benchmark any function passed
to it.

Example:

.. code-block:: python

    import time

    def something(duration=0.000001):
        """
        Function that needs some serious benchmarking.
        """
        time.sleep(duration)
        # You may return anything you want, like the result of a computation
        return 123

    def test_my_stuff(benchmark):
        # benchmark something
        result = benchmark(something)

        # Sometimes you may want to check the result; fast functions
        # are no good if they return incorrect results :-)
        assert result == 123

You can also pass extra arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.02)

Or even keyword arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, duration=0.02)

Another pattern seen in the wild, which is not recommended for micro-benchmarks (very fast code) but may be convenient:

.. code-block:: python

    def test_my_stuff(benchmark):
        @benchmark
        def something():  # unnecessary function call
            time.sleep(0.000001)

A better way is to just benchmark the final function:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.000001)  # way more accurate results!

If you need fine control over how the benchmark is run (like a `setup` function, or exact control of `iterations` and
`rounds`), there's a special mode - pedantic_:

.. code-block:: python

    def my_special_setup():
        ...

    def test_with_setup(benchmark):
        benchmark.pedantic(something, setup=my_special_setup,
                           args=(1, 2, 3), kwargs={'foo': 'bar'},
                           iterations=10, rounds=100)

Commandline options
===================
@@ -174,3 +198,5 @@ pytest-benchmark[aspect]``):
benchmark.weave(Foo.internal, lazy=True)
f = Foo()
f.run()
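The aspect mode above works by patching the named method so every call through it is measured. A hand-rolled sketch of that idea (hypothetical `Foo`, plain monkey-patching rather than the plugin's weaving machinery):

```python
import time

class Foo:
    def internal(self):
        return 42

    def run(self):
        return self.internal()

durations = []
_original = Foo.internal

def _timed(self):
    # wrap the original method so each call records a timing sample
    t0 = time.perf_counter()
    result = _original(self)
    durations.append(time.perf_counter() - t0)
    return result

Foo.internal = _timed  # patch in place, roughly what weave arranges

f = Foo()
f.run()
assert len(durations) == 1  # one call through run(), one sample
```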

.. _pedantic: http://pytest-benchmark.readthedocs.org/en/latest/pedantic.html

