Pytest benchmark override with cpu and gpu times #21
base: main
Conversation
Coveralls: Pull Request Test Coverage Report for Build 1070994715. Warning: this coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch, which means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes. 💛 - Coveralls
benchmarks/cpu_gpu_times_wrapper.py
```python
def __setattr__(self, attr, value):
    setattr(self.wrapped_class, attr, value)
```
I think we can safely remove `__setattr__`. Defining `__setattr__` causes `self.x = y` to always call `__setattr__`, so setting an attribute on self has to be done via `self.__dict__["x"] = y` (which is painful and easy to forget to do). On top of this, it's hard to imagine why someone doing `self.x = y` on the wrapper object would want the attribute to be set on the wrapped class.
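To illustrate the pitfall (a standalone sketch, not code from this PR):

```python
# Standalone sketch of the pitfall described above; not code from this PR.
class Wrapper:
    def __init__(self, wrapped_class):
        # A plain `self.wrapped_class = wrapped_class` would invoke the
        # __setattr__ below before `wrapped_class` exists (raising
        # AttributeError), so the attribute must be set via __dict__.
        self.__dict__["wrapped_class"] = wrapped_class

    def __setattr__(self, attr, value):
        # Every `self.x = y` on the wrapper lands here and is forwarded
        # to the wrapped object rather than stored on the wrapper.
        setattr(self.wrapped_class, attr, value)


class Inner:
    pass


w = Wrapper(Inner())
w.x = 1
assert "x" not in w.__dict__               # never stored on the wrapper
assert w.__dict__["wrapped_class"].x == 1  # landed on the wrapped object
```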
I need things to be set on the wrapped class because it is the one that does all the communication with the pytest benchmark modules. The wrapper class should not have any attributes besides the class being wrapped. Maybe there is an easier way to dynamically add this functionality to the classes?
I think that the rapids version, which does not have the `__setattr__`, may have issues (especially if we register it as a `pytest` plugin) when manually setting attributes, like in Asier's code, which I would like to reuse in part. Notice that if we were to register our wrapper with `pytest`, then when doing `benchmark.groups = "x"` we wouldn't be setting the wrapped `groups` attribute, which then gets pushed to the `BenchmarkSession`.
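A rough sketch of the forwarding behaviour being argued for here (illustrative names, not this PR's exact code):

```python
# Illustrative sketch only; the real wrapper in this PR wraps the
# pytest-benchmark fixture object.
class BenchmarkWrapper:
    def __init__(self, wrapped_class):
        # Keep only the wrapped object on the wrapper itself.
        self.__dict__["wrapped_class"] = wrapped_class

    def __getattr__(self, attr):
        # Attribute reads fall through to the wrapped fixture.
        return getattr(self.wrapped_class, attr)

    def __setattr__(self, attr, value):
        # Writes such as `benchmark.groups = "x"` land on the wrapped
        # fixture, the object whose state the BenchmarkSession reads.
        setattr(self.wrapped_class, attr, value)
```

Without the `__setattr__` forwarding, `benchmark.groups = "x"` would silently create an attribute on the wrapper and never reach the wrapped fixture.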
Just linking to the source code of the pytest-benchmark class we're wrapping for those who are reading along and reviewing: https://github.com/ionelmc/pytest-benchmark/blob/master/src/pytest_benchmark/fixture.py
Commit: compatible with pedantic
@MrRobot2211 What is the status of this PR? Should I review again? Or would you like to make more changes first?
Commit: global variables. The structure had to be changed for this to work; alternatively, the structure could be kept if setup.cfg is changed to recognize the extra module.
@hodgestar I am still missing a tiny bit of code here to generalize the benchmarks to all operations, but I think it is a good idea for you to take a look. I am not very comfortable with having promoted benchmark-tools to a module, but I did not see any better option.
@hodgestar just a reminder.
```
@@ -0,0 +1,158 @@
import warnings

# from pytest_benchmark.fixture import BenchmarkFixture
```
Delete this commented-out line.
I left a couple of comments. Do you have example output from the benchmark runs? Perhaps we can add that as part of this PR somehow.
```
    utility ``cupyx.time.repeat``
    """

    def __init__(self, wrapped_class, iterations=30, rounds=5, warmup_rounds=10):
```
I think "wrapped_class" is really just a "wrapped_obj" here, so we should name it appropriately.
```
        self.warmup_rounds = warmup_rounds

    def __getattr__(self, attr):
        # orig_attr = self.wrapped_class.__getattribute__(attr)
```
Also delete this commented-out code.
The idea here is to override the pytest_benchmark fixture calls so as to retain most of the functionality but use `cupy.benchmark.repeat` for timings instead of `pytest_benchmark`'s internal `runner` object. This allows us to benchmark total, cpu and gpu times. I am striving for a fast solution; the sooner we can start benchmarking the better.

Maybe this could be done more elegantly by using a custom `timer`, but then one would run into trouble elsewhere in the code, and might be forced to create a wrapper class for the output of the timer with a specialized `__add__` method and to change the library used for statistics. At that point, registering our own `pytest` plugin that links to `pytest_benchmark` may be more suitable. That looks more like a long-term implementation for later on.

To quickly test that the functionality works, install pytest_benchmark and run `python src/benchmark_tools/benchmark.py`; you will be shown three different sets of statistics: full time, cpu time and gpu time. You can also build a json file and see the separate statistics; this will allow it to play well with Asier's benchmarking code.
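For readers following along, here is a minimal sketch of the shape of this idea, assuming CuPy's experimental `cupyx.time.repeat` (the utility named in the docstring above) accepts `n_repeat`/`n_warmup` keywords and returns a result exposing `cpu_times`/`gpu_times` arrays; `run_benchmark` and its parameters are illustrative, not this PR's API:

```python
import numpy as np
from cupyx.time import repeat  # experimental CuPy timing utility


def run_benchmark(func, args=(), iterations=30, warmup_rounds=10):
    # Time the callable with CuPy's repeat helper, which records host
    # (CPU) and device (GPU) durations for every repetition.
    result = repeat(func, args, n_repeat=iterations, n_warmup=warmup_rounds)
    cpu = np.asarray(result.cpu_times)
    gpu = np.asarray(result.gpu_times).ravel()  # single-device assumption
    # Report three separate groups of statistics, mirroring the three
    # bench_stats this PR pushes into the BenchmarkSession.
    # ("full" is just cpu + gpu here, purely for illustration.)
    for group, times in (("full", cpu + gpu), ("cpu", cpu), ("gpu", gpu)):
        print(f"{group}: mean={times.mean():.6f}s min={times.min():.6f}s")
```

How the PR itself measures the full time may differ; the sketch only shows why keeping the three timing sources in separate groups is useful.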
Why it works

In the `Fixture` class, the method `_add_stats` appends a new benchmark (`bench_stats`, of class `Metadata`) to a `BenchmarkSession` list. Each element of this list will then be taken one by one by the `as_dict` method and dumped into a json file. No runtime checks are performed in pytest_benchmark on the size of this list. Note that each `bench_stats` keeps the group and extra information from the `Fixture` wherein it was created. By copying the structure in `__call__`/`pedantic` for our new function, instead of creating one `bench_stats` with each call we will be creating three of them with different group attributes, whose statistics will be calculated separately.
Pending tasks

- … `pytest_benchmark` functionality.

I am open to adding more things to the list as the review progresses.
Other similar packages

- The `benchmark` module in `pytest-benchmark`: it adds functionality for benchmarking memory use, but it lacks separate `host` and `device` stats.
- … `unittest` style.