Hurdles

Overview

A simple yet powerful Python benchmark framework. Write unit benchmarks just like you'd write unit tests.

It's not only about crossing the finish line, it's about finding the hurdles that slow you down.

Usage

Writing bench cases

Just like you would with unittest, subclass the BenchCase class, and there you go.

Note: hurdles will only consider files named following the pattern 'bench_*.py', and will only run against classes whose name starts with 'Bench'.
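
For example, with a layout like the one below (the file names are purely illustrative), only the bench_*.py modules and the Bench* classes they contain would be collected:

benchmarks/
    bench_strings.py     # collected: matches 'bench_*.py'
    bench_lists.py       # collected: matches 'bench_*.py'
    helpers.py           # ignored: does not match the pattern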

from hurdles import BenchCase
from hurdles.tools import extra_setup


class BenchMyClass(BenchCase):
    def setUp(self):
        # Just like when unit testing,
        # set up here the attributes that every
        # benchmark method should start from.
        pass

    def tearDown(self):
        # Once again, just like unit testing:
        # get rid of what you've set up and want
        # to be reset between each benchmark.
        pass

    # Every benchmark method has to start with 'bench_'
    # to be run as a benchmark by hurdles.
    def bench_this(self, *args, **kwargs):
        # Do some stuff that you'd want to time here.
        return [x for x in [0] * 100000]

    # hurdles.tools provides an extra_setup decorator
    # which runs some setup code outside the timed
    # benchmark, in order to prepare some data,
    # import some dependencies, etc.
    # Once prepared, the context is injected into kwargs.
    @extra_setup("from random import randint\n"
                 "r = [x for x in xrange(10000)]")
    def bench_that(self, *args, **kwargs):
        [randint(0, 1000) for x in kwargs['r']]
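
As a rough mental model only (not hurdles' actual implementation; the _extra_setup attribute and the run_benchmark_sketch helper below are made up for illustration), you can picture the decorator as recording the setup code so that a runner executes it before the clock starts and hands the resulting names to the benchmark as kwargs:

import time


def extra_setup_sketch(setup_code):
    """Attach untimed setup code to a benchmark method (illustration only)."""
    def decorator(bench_method):
        bench_method._extra_setup = setup_code
        return bench_method
    return decorator


def run_benchmark_sketch(bench_case, method_name):
    """Sketch of how a runner could honour the attached setup code."""
    method = getattr(bench_case, method_name)
    context = {}
    setup_code = getattr(method, '_extra_setup', None)
    if setup_code:
        exec(setup_code, context)          # executed before the clock starts
        context.pop('__builtins__', None)
    start = time.time()
    method(**context)                      # prepared context injected as kwargs
    return (time.time() - start) * 1000.0  # elapsed time in milliseconds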

Running bench cases

Via Code

Bench cases can be run via the .run or .iter methods. You can also restrict which benchmark methods are run at BenchCase instantiation.

    B = BenchMyClass()  # will run every BenchMyClass benchmark
    # or
    B = BenchMyClass(['bench_this'])  # will only run BenchMyClass's 'bench_this' method

    # The BenchCase class provides a .run method to run the
    # benchmarks and print their results on stdout.
    B.run()

    # and an .iter method which provides an iterator over benchmark
    # methods.
    it = B.iter()
    [do_some_stuff(b) for b in it]

Just like the unittest library provides test suites, hurdles comes with bench case suites.

suite = BenchSuite()
suite.add_benchcase(BenchMyCase())
suite.add_benchcase(BenchMyOtherCase(['bench_foo', 'bench_bar']))

suite.run()

Via Cmdline

Hurdles comes with a cmdline util to run your bench cases:

$ hurdles mybenchmarksfolder1/ mybenchmarksfolder2/ benchmarkfile1

It will auto-detect your benchmark modules and classes and run them (using a BenchSuite under the hood). The results are printed on stdout like the following:

$ hurdles mybenchmarksfolder1/
BenchProof.bench_this
 | average   9.301 ms
 | median    8.445 ms
 | fastest   7.63 ms
 | slowest   13.25 ms
BenchProof.bench_that
 | average   13.126 ms
 | median    12.06 ms
 | fastest   11.68 ms
 | slowest   19.83 ms

------------------------------------------------------------ 
Ran 2 benchmarks 

Done.          

Note that hurdles also supports multiple output formats:

  • csv
  • tsv
  • json
  • yaml
  • xls

To use them, just pass hurdles the -f option:

$ hurdles mybenchmarksfolder1/ -f tsv
benchcase.method    average median  fastest slowest
BenchProof.bench_this    5.315   3.5 3.41    21.02
BenchProof.bench_that    8.866   7.965   7.53    15.53

$ hurdles mybenchmarksfolder2/ -f yaml
- {average: 3.0940000000000003, benchcase.method: BenchProof.bench_this, fastest: 2.85,
  median: 3.04, slowest: 3.78}
- {average: 7.119999999999999, benchcase.method: BenchProof.bench_that, fastest: 6.76,
  median: 7.035, slowest: 8.0}

# You can also send the results to a file
# by simply redirecting stdout.
$ hurdles mybenchmarksfolder1/ -f tsv > result.tsv
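
The structured formats are easy to load back if you want to post-process results. Here is a small sketch reading the result.tsv file produced above with Python's csv module (the column names are taken from the sample output; the ranking logic is just an example):

import csv

# Read the TSV report produced by `hurdles ... -f tsv > result.tsv`
# and print the benchmark with the highest average time.
with open('result.tsv') as f:
    rows = list(csv.DictReader(f, delimiter='\t'))

slowest = max(rows, key=lambda row: float(row['average']))
print(slowest['benchcase.method'], slowest['average'], 'ms')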

For more examples, see the examples folder included in the repository.
