First stab on dropwizard-benchmarks #990

Merged
merged 3 commits into dropwizard:master from arteam:first_benchmarks on Apr 15, 2015

Conversation

4 participants
@arteam
Member

arteam commented Apr 11, 2015

Add benchmarks for Duration and Size

The benchmarks measure the average time to parse a string into the Duration and Size representations.
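As a rough standalone illustration of what "average time to parse" means here, the sketch below times repeated parses and divides by the iteration count. It uses java.time.Duration as a stand-in, since io.dropwizard.util.Duration isn't on a stock classpath, and it lacks the forking, warmup control, and statistical rigor that JMH provides:

```java
import java.time.Duration;

public class ParseTiming {
    // Naive average-time measurement; JMH does this properly with
    // forked JVMs, warmup iterations, and dead-code-elimination guards.
    public static long averageParseNanos(String input, int iterations) {
        // Warmup pass so the JIT compiles the parse path before timing.
        for (int i = 0; i < iterations; i++) {
            Duration.parse(input);
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            Duration.parse(input);
        }
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        // "PT5S" is ISO-8601 for five seconds.
        System.out.println(averageParseNanos("PT5S", 100_000) + " ns/op");
    }
}
```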

arteam added some commits Apr 11, 2015

Setup JMH
Add the harness and annotation-processor to run benchmarks
from the main method.
Add a benchmark for Duration
The benchmark measures an average time to parse a string
to a Duration representation.
Add a benchmark for Size
The benchmark measures an average time to parse a string
to a Size representation.
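The commits above add the JMH harness and annotation processor; in Maven terms that typically looks like the following dependency block (the version shown is illustrative, not necessarily what this PR uses):

```xml
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <!-- illustrative version; use the current JMH release -->
    <version>1.9</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.9</version>
    <scope>provided</scope>
</dependency>
```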
@jplock
Member

jplock commented Apr 13, 2015

This looks good to me. @carlo-rtr did you want the benchmarks to focus more on benchmarking the entire system (Jetty, Jersey, etc.), or is this level of granularity ok?

@carlo-rtr

Member

carlo-rtr commented Apr 13, 2015

I was thinking more macro, like the empower benchmarks. I've updated the issue: #963

With that said, it seems like this would be useful too. I have never used this before, so I'm going to read up then opine.

@arteam
Member

arteam commented Apr 13, 2015

My 2 cents:

  • The situation with micro-benchmarks is very similar to unit testing. There is little profit in a single lone benchmark, but when all the code is covered by benchmarks, it's far easier to find the root of a performance problem early. When a developer checks in new code, we can run the micro-benchmarks and check for regressions. If there are any, it's a sign of a possible performance problem.
  • It's a tool for the maintainers to argue about performance with numbers. For example, if a maintainer thinks that a pull request decreases performance, he can back up his opinion with numbers.
  • I don't know where it will lead us, but it's a tool that can improve our awareness when making decisions. More data is better.
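The regression-gating idea in the first bullet can be sketched as a simple threshold comparison; the names and the 10% tolerance below are hypothetical, not something this PR implements:

```java
// Hypothetical regression check along the lines described above:
// flag a benchmark whose new average time exceeds the stored baseline
// by more than a tolerance.
public class RegressionCheck {
    // baselineNanos: stored average from a previous run;
    // currentNanos: average from the freshly checked-in code;
    // tolerance: allowed relative slowdown, e.g. 0.10 for +10%.
    public static boolean isRegression(double baselineNanos,
                                       double currentNanos,
                                       double tolerance) {
        return currentNanos > baselineNanos * (1.0 + tolerance);
    }

    public static void main(String[] args) {
        System.out.println(isRegression(100.0, 130.0, 0.10)); // well past +10%
        System.out.println(isRegression(100.0, 105.0, 0.10)); // within noise
    }
}
```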
@jplock

Member

jplock commented Apr 15, 2015

I think this is a good first step. I'll merge it.

jplock added a commit that referenced this pull request Apr 15, 2015

Merge pull request #990 from arteam/first_benchmarks
First stab on dropwizard-benchmarks

@jplock jplock merged commit aa50382 into dropwizard:master Apr 15, 2015

@jplock jplock added this to the 0.9.0 milestone Apr 15, 2015

@carlo-rtr

Member

carlo-rtr commented Apr 15, 2015

Do you think we should have an abstract class with the main method? The only thing changing is the name; we could have an abstract method getBenchMarkName().
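A sketch of that abstract-base-class shape (all class names here are hypothetical; a real runner would feed the name into JMH's OptionsBuilder rather than print it):

```java
// Hypothetical shape of the suggestion: the main-method boilerplate
// lives in one abstract base, and each benchmark class only supplies
// its own name via getBenchMarkName().
abstract class AbstractBenchmark {
    abstract String getBenchMarkName();

    void run() {
        // In the real project this would pass the name to JMH's
        // OptionsBuilder.include(...) and launch the harness;
        // printing keeps the sketch self-contained.
        System.out.println("Running " + getBenchMarkName());
    }
}

class DurationBenchmark extends AbstractBenchmark {
    @Override
    String getBenchMarkName() {
        return DurationBenchmark.class.getSimpleName();
    }
}

public class BenchmarkMain {
    public static void main(String[] args) {
        new DurationBenchmark().run();
    }
}
```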

@arteam arteam added the task label Apr 26, 2015

@arteam arteam deleted the arteam:first_benchmarks branch Jan 24, 2016

@ryankennedy

Member

ryankennedy commented Feb 6, 2016

This is really cool. Any reason not to shade the final JAR and make the Main-Class be org.openjdk.jmh.Main? I just pulled these changes down and tried to run them but the resulting JAR doesn't have a Main-Class attribute in the manifest and even specifying the DurationBenchmark class results in a java.lang.NoClassDefFoundError: io/dropwizard/util/Duration.

The instructions for generating a project via mvn archetype:generate produce a POM with shading and the Main-Class set up as I mentioned above. I'd be happy to go in and make the changes, but I first wanted to make sure @arteam hadn't skipped those for a good reason.
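For reference, the shading setup described here, as generated by the JMH archetype, looks roughly like this in the pom.xml (version and finalName are illustrative; treat this as a sketch of the archetype output, not the exact change being proposed):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <finalName>benchmarks</finalName>
                <transformers>
                    <!-- Sets Main-Class in the shaded JAR's manifest so
                         `java -jar benchmarks.jar` runs the JMH runner. -->
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>org.openjdk.jmh.Main</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>
```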
