safl committed Apr 3, 2015
1 parent 44683c8 commit 785a92a
Showing 18 changed files with 142 additions and 44 deletions.
6 changes: 3 additions & 3 deletions README.rst
@@ -20,13 +20,13 @@ Fire up your terminal, and::

     # Source environment vars
     source util/setbpenv.bash
 
-You now have the Benchpress commands, ``bp_run``, ``bp_times``, ``bp_info``, ``bp_compile``, and ``bp_grapher`` ready at your finger-tips along with all the benchmarks and suites.
+You now have the Benchpress commands, ``bp-run``, ``bp-times``, ``bp-info``, ``bp-compile``, and ``bp-grapher`` ready at your finger-tips along with all the benchmarks and suites.
 
 Go ahead and run the `numpy_only` suite, executing each benchmark in the suite twice::
 
-    bp_run --no-perf --no-time --runs 2 --output my_run.json suites/numpy_only.py
+    bp-run --no-perf --no-time --runs 2 --output my_run.json suites/numpy_only.py
 
 The above will store results from the run in the file `my_run.json`. You can inspect the elapsed wall-clock by executing::
 
-    bp_times my_run.json
+    bp-times my_run.json

2 changes: 1 addition & 1 deletion benchmarks/heat_equation/cpp11_opencl/issues.rst
@@ -1,5 +1,5 @@
 Two known issues::
 
 * Implementation compiles (with warning) but execution is untested.
-* Implementation does not use bp_util for argparsing and timing, getting it to run in a suite might be cumbersome...
+* Implementation does not use ``bp-util`` for argparsing and timing, getting it to run in a suite might be cumbersome...

File renamed without changes.
4 changes: 2 additions & 2 deletions bin/bp_grapher → bin/bp-grapher
@@ -5,19 +5,19 @@ import pprint
 import os
 from benchpress.gen_graphs import main
 from benchpress import result_parser
+import benchpress.grapher
 
 if __name__ == "__main__":
 
     graph_types = {}    # Auto-load all graph-modules from grapher/*
                         # The graph-class must be a capitalized version of the
                         # filename
-    for _, module, _ in pkgutil.iter_modules(['benchpress'+os.sep+'grapher']):
+    for _, module, _ in pkgutil.iter_modules([os.path.dirname(benchpress.grapher.__file__)]):
         if module == 'graph':
             continue
 
         module_caps = module.capitalize()
         m = __import__("benchpress.grapher.%s" % module, globals(), locals(), [module_caps], -1)
 
         graph_types[module] = m.__dict__[module_caps]
 
     parser = argparse.ArgumentParser(
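The updated loop discovers graph modules from the installed package's own directory rather than a path relative to the current working directory, so `bp-grapher` keeps working no matter where it is invoked from. A minimal Python 3 sketch of the same auto-loading convention follows; the module name `scatter` and class `Scatter` are invented for illustration, and the original script uses Python 2's `__import__` rather than `importlib`:

```python
import importlib.util
import os
import pkgutil
import tempfile

# Build a throw-away plugin package with one module, "scatter.py", whose
# plugin class is the capitalized module name -- the same convention
# bp-grapher relies on (all names here are made up for this sketch).
pkg_dir = os.path.join(tempfile.mkdtemp(), "grapher")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "scatter.py"), "w") as f:
    f.write("class Scatter:\n    kind = 'scatter'\n")

# Iterate over the package's directory, resolved at runtime, and load
# each module; this mirrors resolving the path from __file__ as the
# commit does, instead of hard-coding a cwd-relative path.
graph_types = {}
for _, module, _ in pkgutil.iter_modules([pkg_dir]):
    spec = importlib.util.spec_from_file_location(
        module, os.path.join(pkg_dir, module + ".py"))
    m = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(m)
    graph_types[module] = getattr(m, module.capitalize())

print(sorted(graph_types))  # ['scatter']
```

Deriving the search path from the package's `__file__` is what makes the command usable after a `pip install`, where the source tree is no longer beneath the working directory.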
File renamed without changes.
File renamed without changes.
File renamed without changes.
2 changes: 1 addition & 1 deletion doc/source/install.rst
@@ -18,7 +18,7 @@ The following shows how to do a user-mode / local installation::
 
     pip install benchpress --user
 
-Extend your ``$PATH``, such that the commands (`bp_info`, `bp_run`, `bp_times`, `bp_compile`, and `bp_grapher`) are readily available::
+Extend your ``$PATH``, such that the commands (`bp-info`, `bp-run`, `bp-times`, `bp-compile`, and `bp-grapher`) are readily available::
 
     export PATH=$PATH:$HOME/.local/bin
6 changes: 3 additions & 3 deletions doc/source/quickstart.rst
@@ -12,13 +12,13 @@ Fire up your terminal, and::

     # Source environment vars
     source util/setbpenv.bash
 
-You now have the Benchpress commands, ``bp_run``, ``bp_times``, ``bp_info``, ``bp_compile``, and ``bp_grapher`` ready at your finger-tips along with all the benchmarks and suites.
+You now have the Benchpress commands, ``bp-run``, ``bp-times``, ``bp-info``, ``bp-compile``, and ``bp-grapher`` ready at your finger-tips along with all the benchmarks and suites.
 
 Go ahead and run the `numpy_only` suite, executing each benchmark in the suite twice::
 
-    bp_run --no-perf --no-time --runs 2 --output my_run.json suites/numpy_only.py
+    bp-run --no-perf --no-time --runs 2 --output my_run.json suites/numpy_only.py
 
 The above will store results from the run in the file `my_run.json`. You can inspect the elapsed wall-clock by executing::
 
-    bp_times my_run.json
+    bp-times my_run.json

25 changes: 12 additions & 13 deletions doc/source/usage.rst
@@ -2,29 +2,28 @@
 Usage
 =====
 
-bp_info
+bp-info
 -------
 
-...
-
-bp_compile
-----------
+.. literalinclude:: usage_bp-info.rst
 
-...
-
-bp_run
+bp-run
 ------
 
-...
+.. literalinclude:: usage_bp-run.rst
 
-bp_times
+bp-times
 --------
 
-...
+.. literalinclude:: usage_bp-times.rst
 
-bp_grapher
+bp-grapher
 ----------
 
-...
+.. literalinclude:: usage_bp-grapher.rst
+
+bp-compile
+----------
+
+...

25 changes: 25 additions & 0 deletions doc/source/usage_bp-grapher.out
@@ -0,0 +1,25 @@
usage: bp-grapher [-h] [--output OUTPUT] [--postfix POSTFIX]
[--formats FORMATS [FORMATS ...]]
[--type {bypass_bwd,bypass_overhead,scale,bypass_latency,bypass_bwo,daily,npbackend,cluster,cpu,absolute}]
[--warmups WARMUPS] [--baseline BASELINE]
[--order ORDER [ORDER ...]] [--ylimit YLIMIT]
results

Generate different types of graphs.

positional arguments:
results Path to benchmark results.

optional arguments:
-h, --help show this help message and exit
--output OUTPUT Where to store generated graphs.
--postfix POSTFIX Append this to the filename of the generated graph(s).
--formats FORMATS [FORMATS ...]
Output file-format(s) of the generated graph(s).
--type {bypass_bwd,bypass_overhead,scale,bypass_latency,bypass_bwo,daily,npbackend,cluster,cpu,absolute}
The type of graph to generate
--warmups WARMUPS Specify the amount of samples from warm-up rounds.
--baseline BASELINE Baseline label for relative graphs.
--order ORDER [ORDER ...]
Ordering of the ticks.
--ylimit YLIMIT Max value of the y-axis
15 changes: 15 additions & 0 deletions doc/source/usage_bp-info.out
@@ -0,0 +1,15 @@
usage: bp-info [-h] [--mod] [--mod_parent] [--all] [--docsrc] [--benchmarks]
[--suites] [--hooks] [--commands]

Retrieve misc. info on Benchpress.

optional arguments:
-h, --help show this help message and exit
--mod Location of the Python module
--mod_parent Location of the Python module parent
--all Show all paths
--docsrc Location of the documentation source
--benchmarks Location of benchmarks
--suites Location of suites
--hooks Location of hooks
--commands Location of commands
43 changes: 43 additions & 0 deletions doc/source/usage_bp-run.out
@@ -0,0 +1,43 @@
usage: bp-run [-h] [--output RESULT_FILE] [--runs RUNS] [--no-perf]
[--no-time] [--save-data] [--pre-clean] [--restart]
[--publish-cmd COMMAND] [--slurm] [--no-slurm]
[--partition PARTITION] [--multi-jobs] [--wait]
bohrium_src suite_file

Runs a benchmark suite and stores the results in a json-file.

positional arguments:
bohrium_src Path to the Bohrium source-code.
suite_file Path to the benchmark suite file.

optional arguments:
-h, --help show this help message and exit
--output RESULT_FILE Path to the JSON file where the benchmark results will
be written. If the file exist, the benchmark will
resume.
--runs RUNS How many times should each benchmark run.
--no-perf Disable the use of the perf measuring tool.
--no-time Disable the use of the '/usr/bin/time -v' measuring
tool.
--save-data Save data output from benchmarks in RESULT_FILE. All
benchmarks must support the --outputfn argument.
--pre-clean Clean caches such as the fuse or the kernel cache
before execution.
--restart Restart execution or submission of failed jobs.
--publish-cmd COMMAND
The publish command to use before exiting (use
together with --wait). NB: $OUT is replaced with the
name of the output JSON file.

SLURM Queuing System:
--slurm Use the SLURM queuing system. This overwrite the
default value specified in the suite
('use_slurm_default')
--no-slurm Do not use the SLURM queuing system. This overwrite
the default value specified in the suite
('use_slurm_default')
--partition PARTITION
Submit to a specific SLURM partition.
--multi-jobs Submit 'runs' SLURM jobs instead of one job with
'runs' number of runs.
--wait Wait for all SLURM jobs to finished before returning.
12 changes: 12 additions & 0 deletions doc/source/usage_bp-times.out
@@ -0,0 +1,12 @@
usage: bp-times [-h] [--printer {troels,datadiff,times,raw,csv,parsed}]
[--baseline BASELINE]
results

positional arguments:
results JSON file containing results

optional arguments:
-h, --help show this help message and exit
--printer {troels,datadiff,times,raw,csv,parsed}
How to print results.
--baseline BASELINE Set a baseline run.
10 changes: 5 additions & 5 deletions module/setup.py
@@ -34,11 +34,11 @@ def make_dfiles(prefix, directory):
         make_dfiles('share/benchpress', paths['suites']),
     packages = ['benchpress'],
     scripts = [
-        os.sep.join([paths["commands"], "bp_info"]),
-        os.sep.join([paths["commands"], "bp_run"]),
-        os.sep.join([paths["commands"], "bp_times"]),
-        os.sep.join([paths["commands"], "bp_grapher"]),
-        os.sep.join([paths["commands"], "bp_compile"]),
+        os.sep.join([paths["commands"], "bp-info"]),
+        os.sep.join([paths["commands"], "bp-run"]),
+        os.sep.join([paths["commands"], "bp-times"]),
+        os.sep.join([paths["commands"], "bp-grapher"]),
+        os.sep.join([paths["commands"], "bp-compile"]),
         os.sep.join([paths["hooks"], "proxy-VEM-pre-hook.sh"])
     ]
 )
26 changes: 13 additions & 13 deletions suites/default.py
@@ -16,23 +16,23 @@
 #
 
 # Python
-dython_numpy = ('Dython/NP', 'dython `bp_info --benchmarks`/{script}/python_numpy/{script}.py {args}', None)
-python_numpy = ('Python/NP', 'python `bp_info --benchmarks`/{script}/python_numpy/{script}.py {args}', None)
-python_bohrium = ('Python/BH', 'python -m bohrium `bp_info --benchmarks`/{script}/python_numpy/{script}.py --bohrium=True {args}', None)
+dython_numpy = ('Dython/NP', 'dython `bp-info --benchmarks`/{script}/python_numpy/{script}.py {args}', None)
+python_numpy = ('Python/NP', 'python `bp-info --benchmarks`/{script}/python_numpy/{script}.py {args}', None)
+python_bohrium = ('Python/BH', 'python -m bohrium `bp-info --benchmarks`/{script}/python_numpy/{script}.py --bohrium=True {args}', None)
 
 # C
-c99_seq = ('C/SEQ', '`bp_info --benchmarks`/{script}/c99_seq/bin/{script} {args}', None)
-c99_omp = ('C/OMP', '`bp_info --benchmarks`/{script}/c99_omp/bin/{script} {args}', None)
-c99_omp_mpi = ('C/OMP_MPI', 'mpirun `bp_info --benchmarks`/{script}/c99_omp_mpi/bin/{script} {args}', None)
+c99_seq = ('C/SEQ', '`bp-info --benchmarks`/{script}/c99_seq/bin/{script} {args}', None)
+c99_omp = ('C/OMP', '`bp-info --benchmarks`/{script}/c99_omp/bin/{script} {args}', None)
+c99_omp_mpi = ('C/OMP_MPI', 'mpirun `bp-info --benchmarks`/{script}/c99_omp_mpi/bin/{script} {args}', None)
 
 # C++
-cpp11_seq = ('CPP/SEQ', '`bp_info --benchmarks`/{script}/cpp11_seq/bin/{script} {args}', None)
-cpp11_omp = ('CPP/OMP', '`bp_info --benchmarks`/{script}/cpp11_omp/bin/{script} {args}', None)
-cpp11_arma = ('CPP/Arma', '`bp_info --benchmarks`/{script}/cpp11_armadillo/bin/{script} {args}', None)
-cpp11_blitz = ('CPP/Blitz', '`bp_info --benchmarks`/{script}/cpp11_blitz/bin/{script} {args}', None)
-cpp11_eigen = ('CPP/Eigen', '`bp_info --benchmarks`/{script}/cpp11_eigen/bin/{script} {args}', None)
-cpp11_boost = ('CPP/Boost', '`bp_info --benchmarks`/{script}/cpp11_boost/bin/{script} {args}', None)
-cpp11_bxx = ('CPP/BH', '`bp_info --benchmarks`/{script}/cpp11_bxx/bin/{script} {args}', None)
+cpp11_seq = ('CPP/SEQ', '`bp-info --benchmarks`/{script}/cpp11_seq/bin/{script} {args}', None)
+cpp11_omp = ('CPP/OMP', '`bp-info --benchmarks`/{script}/cpp11_omp/bin/{script} {args}', None)
+cpp11_arma = ('CPP/Arma', '`bp-info --benchmarks`/{script}/cpp11_armadillo/bin/{script} {args}', None)
+cpp11_blitz = ('CPP/Blitz', '`bp-info --benchmarks`/{script}/cpp11_blitz/bin/{script} {args}', None)
+cpp11_eigen = ('CPP/Eigen', '`bp-info --benchmarks`/{script}/cpp11_eigen/bin/{script} {args}', None)
+cpp11_boost = ('CPP/Boost', '`bp-info --benchmarks`/{script}/cpp11_boost/bin/{script} {args}', None)
+cpp11_bxx = ('CPP/BH', '`bp-info --benchmarks`/{script}/cpp11_bxx/bin/{script} {args}', None)
 
 # C#
 
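Each bridge in the suite above is a `(label, command-template, env)` tuple whose `{script}` and `{args}` placeholders are filled in per benchmark before the command is executed. A rough sketch of that substitution, assuming only `str.format`; the `expand` helper is hypothetical and not Benchpress's actual runner:

```python
# A bridge is a (label, command template, env) tuple, as in the suite
# file above; {script} and {args} are substituted per benchmark run.
python_numpy = (
    'Python/NP',
    'python `bp-info --benchmarks`/{script}/python_numpy/{script}.py {args}',
    None,
)

def expand(bridge, script, args):
    """Fill a bridge's command template for one benchmark invocation."""
    label, template, _env = bridge
    return label, template.format(script=script, args=args)

label, cmd = expand(python_numpy, 'black_scholes', '--size=1000000*10')
print(cmd)
# python `bp-info --benchmarks`/black_scholes/python_numpy/black_scholes.py --size=1000000*10
```

The backtick-quoted `` `bp-info --benchmarks` `` is left for the shell to expand at run time, which is why the rename from `bp_info` touches every suite file.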
2 changes: 1 addition & 1 deletion suites/numpy_only.py
@@ -3,7 +3,7 @@
 suites = [
     {
         'bridges': [
-            ('NumPy', 'taskset -c 0 python `bp_info --benchmarks`/{script}/python_numpy/{script}.py {args}', None)
+            ('NumPy', 'taskset -c 0 python `bp-info --benchmarks`/{script}/python_numpy/{script}.py {args}', None)
         ],
         'scripts': [
             ('Black Scholes', 'black_scholes', '--size=1000000*10'),
8 changes: 6 additions & 2 deletions util/Makefile
@@ -7,8 +7,8 @@
 #
 #	cd ../ && source util/setbpenv.bash
 #
-DOCSRC := $(shell bp_info --docsrc)
-MOD_PARENT := $(shell bp_info --mod_parent)
+DOCSRC := $(shell bp-info --docsrc)
+MOD_PARENT := $(shell bp-info --mod_parent)
 
 # Install / uninstall using pip
 install:
@@ -20,6 +20,10 @@ uninstall:
 # Generate sphinx doc
 docs:
 	cd $(DOCSRC) && ./autodoc_benchmarks.py > source/benchmarks.rst && make html
+	cd $(DOCSRC) && bp-info -h > source/usage_bp-info.out
+	cd $(DOCSRC) && bp-run -h > source/usage_bp-run.out
+	cd $(DOCSRC) && bp-times -h > source/usage_bp-times.out
+	cd $(DOCSRC) && bp-grapher -h > source/usage_bp-grapher.out
 
 # Upload to PyPi
 upload:
