
Introduce performance benchmarking suite #824

Merged
merged 3 commits into from Jun 14, 2018

Conversation

2 participants
@patiences
Contributor

patiences commented May 29, 2018

In the coming weeks, as part of my GSoC project, there will be changes aimed at improving the performance of the code, so we need a way to evaluate that. This is a first PR with some example tests.
This test is not intended to be run like the other unit tests; rather, it is run from the command line and produces output like:

Running test_small_integers ...
  Elapsed time:  37.10159900219878  ms
  CPU process time:  274.72000000000025  ms
Running test_booleans ...
  Elapsed time:  18.298730996320955  ms
  CPU process time:  181.10000000000014  ms
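For context, output in that format can be produced by a small harness built on `time.perf_counter` and `time.process_time` (the two clocks used in the PR's diff below). This is a hedged sketch; `run_benchmark` and the workload are illustrative, not the PR's actual code:

```python
import time

def run_benchmark(name, fn):
    # Print the test name, then time the callable with both the
    # wall-clock and CPU clocks, mirroring the sample output above.
    print("Running %s ..." % name)
    t1_start = time.perf_counter()
    t2_start = time.process_time()
    fn()
    elapsed_ms = (time.perf_counter() - t1_start) * 1000
    cpu_ms = (time.process_time() - t2_start) * 1000
    print("  Elapsed time: ", elapsed_ms, " ms")
    print("  CPU process time: ", cpu_ms, " ms")
    return elapsed_ms, cpu_ms

# Example usage with a trivial stand-in workload:
elapsed, cpu = run_benchmark("test_small_integers", lambda: sum(range(100000)))
```

Note that `process_time` counts CPU time across all threads of the process, which is why it can exceed the wall-clock figure in the sample output.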
transpiler.transpile_string("test.py", main_code)
if extra_code:
    for name, code in extra_code.items():
        transpiler.transpile_string("%s.py" % name.replace('.', os.path.sep), adjust(code))
t1_stop = time.perf_counter()
t2_stop = time.process_time()


@freakboy3742

freakboy3742 May 30, 2018

Member

This won't be benchmarking what you think it is (or, at least, it's not benchmarking the thing that needs to be benchmarked). These timing calls are wrapped around the transpilation process, which is the process of converting Python code to Java bytecode. While that is certainly something that can be benchmarked, it's not the performance concern here: runtime performance is the problem.

So - we need the start and end timers to be executed in Java, so that we're performance testing the execution of the code, not the compilation.

The simplest version of this would be to wrap the timer calls around the call to subprocess (since that's the part that spawns the actual invocation).
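Under that suggestion, the timers would wrap the subprocess invocation rather than the transpilation step, so only the execution of the already-compiled code is measured. A minimal sketch (the command is an assumption; in the actual PR the subprocess would be the `java` invocation of the compiled class, here replaced with a trivial `sys.executable` call to keep the sketch runnable):

```python
import subprocess
import sys
import time

# Time only the spawned process that executes the compiled code,
# so transpilation cost is excluded from the measurement.
t1_start = time.perf_counter()
subprocess.run(
    [sys.executable, "-c", "print('benchmark workload')"],
    check=True,
    stdout=subprocess.DEVNULL,
)
elapsed_ms = (time.perf_counter() - t1_start) * 1000
print("  Elapsed time: ", elapsed_ms, " ms")
```

This does include process startup overhead (JVM startup in the real case), which is worth keeping in mind when comparing numbers across runs.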


@patiences

patiences May 30, 2018

Contributor

Ah-ha.. I see. Thanks! Done

@@ -0,0 +1,31 @@
import sys
from os import path
sys.path.append( path.dirname( path.dirname( path.abspath(__file__) ) ) )


@freakboy3742

freakboy3742 May 30, 2018

Member

The spacing on this is a little odd.


@patiences

patiences May 30, 2018

Contributor

Oops, fixed. Done

return out
def runAndBenchAsJava(self, test_name, code):


@freakboy3742

freakboy3742 May 30, 2018

Member

I can see what you're doing here, but it's probably better served as two lines of code in the external wrapper invoking the code, rather than something on the utility class.


@patiences

patiences May 30, 2018

Contributor

Done

patiences added some commits May 30, 2018

@freakboy3742

👍

@freakboy3742 freakboy3742 merged commit a0bbc11 into pybee:master Jun 14, 2018

5 checks passed

beekeeper:0/beefore:javacheckstyle Java lint checks passed.
beekeeper:0/beefore:pycodestyle Python lint checks passed.
beekeeper:1/smoke-test Smoke build (Python 3.4) passed.
beekeeper:2/full-test:py3.5 Python 3.5 tests passed.
beekeeper:2/full-test:py3.6 Python 3.6 tests passed.

@patiences patiences deleted the patiences:perf-benchmarking branch Jul 2, 2018
