Comparative Benchmarking #478
Sorry for closing and reopening this. It is possible with a combination of […]; however, this suffers (at least on the 0.2 version) from several drawbacks: […]
Besides fixing the first point (or did I do something wrong?), would it be possible to have a specialized class for comparative benchmarks? That class would be displayed like a parametrized benchmark, but instead of using the parametrization as parameters it would use the methods. I don't know if that's within the scope of […].
The "plotting options" thing is a bug (probably best discussed in a separate issue ticket). I'm not completely sure from the description above what the problem is with using parameterized benchmarks for what you are trying to do. Could you post some example code of what you are currently doing and what you'd like to do to gist.github.com, to clarify this? If you are trying to have each method in a benchmark class define a parameter value and an associated benchmark, I think that can be achieved generically with a class decorator or a metaclass. I'm not sure it would be a good idea to include such a wrapper in asv itself, as it introduces multiple ways to declare the same thing.
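To make the class-decorator idea above concrete, here is a minimal sketch of what such a wrapper could look like. It is not part of asv; the decorator name `compare_methods` and the `impl_*` naming convention are invented for illustration. The decorator collects every `impl_*` method on the class and exposes them as the parameter values of a single parameterized `time_*` benchmark, which asv would then plot as one comparative graph.

```python
# Hypothetical sketch: `compare_methods` and the `impl_*` convention are
# made-up names, not asv API. The decorator turns each `impl_*` method
# into one parameter value of a single parameterized benchmark.

def compare_methods(cls):
    # Collect the implementations defined directly on the class.
    impls = {name[len("impl_"):]: func
             for name, func in vars(cls).items()
             if name.startswith("impl_")}

    def time_implementation(self, label):
        # asv calls this once per parameter value; dispatch to the method.
        impls[label](self)

    # asv reads `params`/`param_names` off the benchmark function itself.
    time_implementation.params = [sorted(impls)]
    time_implementation.param_names = ["implementation"]
    cls.time_implementation = time_implementation
    return cls


@compare_methods
class ExampleSuite:
    def impl_listcomp(self):
        [i * i for i in range(1000)]

    def impl_map(self):
        list(map(lambda i: i * i, range(1000)))
```

Because every implementation becomes one value of the same parameter, asv's web UI would show them as lines in a single plot rather than as separate benchmarks.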
@pv I haven't used class decorators/metaclasses to that extent yet, so I don't know how I would go about implementing this. But currently I have an implementation like this: https://gist.github.com/MSeifert04/d2d4013093362c9e71c60b16ef53e355. For example, I have different functions, say […]. I hope it's not too much of a mess, and I also think this is not really general enough to be useful for lots of people. 😄
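A sketch of the pattern described above, without the decorator machinery. The function names (`lib_a_sum`, `lib_b_sum`) and labels are invented for illustration: implementations with different signatures are wrapped into a uniform call shape, and the readable string labels are used as the asv parameter values, so the plot legend stays clean even though the underlying functions differ.

```python
# Hedged sketch: `lib_a_sum` / `lib_b_sum` stand in for the competing
# implementations; only `params`/`param_names`/`setup`/`time_*` are asv
# conventions here.

def lib_a_sum(values):          # takes an iterable
    return sum(values)

def lib_b_sum(n, start=0):      # takes a count and a start value
    return sum(range(start, n))

# Adapters give every implementation the same signature, hiding the
# differences in argument names and order.
IMPLEMENTATIONS = {
    "lib_a": lambda n: lib_a_sum(range(n)),
    "lib_b": lambda n: lib_b_sum(n),
}

class SumSuite:
    # Plain strings as parameter values -> readable names in asv's output,
    # instead of the functions' full (or hidden) qualified names.
    params = [sorted(IMPLEMENTATIONS), [100, 10_000]]
    param_names = ["implementation", "n"]

    def setup(self, impl, n):
        self.func = IMPLEMENTATIONS[impl]

    def time_sum(self, impl, n):
        self.func(n)
```

Passing strings rather than the function objects themselves also sidesteps the naming problem mentioned in the issue, since asv otherwise labels each variant with the function's repr.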
I'm currently working on a benchmarking project where I want to benchmark different implementations of some function. Writing the benchmarks isn't hard, but it's not so easy (impossible?) to display them as comparative benchmarks in one graph.
I know that it's possible to parametrize the benchmarks, but these get weird names (they use the full name of the function, and if the function is defined in the benchmarking file it isn't displayed at all), and obviously different implementations of the same functionality may differ (argument names, argument order, ...) in ways that make it really ugly to parametrize the test.
Would it be possible to either: […]