
Add module for jmh benchmarks #55

Merged (1 commit) on Feb 5, 2016

Conversation

vlsi (Contributor) commented Nov 14, 2015

No description provided.

sameb (Contributor) commented Nov 14, 2015

Can you explain in a little more detail what this is and why it's useful?

vlsi (Contributor, Author) commented Nov 14, 2015

It is a very convenient tool that allows measuring and analyzing performance.
For instance, a simple `java -jar benchmarks.jar -t 8` allowed me to capture the performance of `Beans.newInstance(Beans.class)` under 8 concurrent threads (see #53)
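For reference, the flags used throughout this thread are standard options of the JMH launcher jar. A usage sketch, assuming the module builds a `benchmarks.jar` as described:

```shell
# Run all benchmarks with 8 concurrent threads (as above)
java -jar benchmarks.jar -t 8

# Typical knobs: forks (-f), warmup iterations (-wi), measurement iterations (-i)
java -jar benchmarks.jar -f 1 -wi 5 -i 5 -t 8

# Attach the GC profiler to report allocation rates per operation
java -jar benchmarks.jar -prof gc

# List available benchmarks and profilers
java -jar benchmarks.jar -l
java -jar benchmarks.jar -lprof
```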

sameb (Contributor) commented Nov 14, 2015

Yup, I understand that jmh is for benchmarking.

Do you plan to add more benchmarks?

The framework is nice, but I'm hesitant to add it if all we're going to have is a benchmark for a single method.

@vlsi vlsi mentioned this pull request Nov 14, 2015
vlsi (Contributor, Author) commented Nov 14, 2015

I hope the suite will grow over time.
Even a single benchmark helps validate lots of PRs: #50, #51, #53, #54

raphw (Member) commented Feb 4, 2016

I am not sure about the value of this benchmark. An absolute value does not give any indication, as it is machine- and runtime-context dependent. One would need to add some form of baseline to have a comparison value. Also, how does the flight recorder profile help here? Again, without a baseline, the results are dubious.

vlsi (Contributor, Author) commented Feb 4, 2016

@raphw, the benchmark is like a unit test for performance.
For instance, if one adds a feature, the benchmark can be used to validate "before" and "after" performance.

If a benchmark is missing, there is no way to tell whether a change broke performance or not.

> How does the flight recorder profile help here?

Flight Recorder is for investigation purposes.
For instance, when you compare "before" and "after", Flight Recorder is very helpful for identifying problems such as excessive memory allocation.

Does that answer your questions?
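The "before and after" workflow described above can be sketched without JMH as a minimal hand-rolled comparison. This is an illustration only: `Bean` is a hypothetical stand-in for the benchmarked class, and a real measurement should use JMH, which handles warm-up, forking, and dead-code elimination properly.

```java
import java.lang.reflect.Constructor;

public class BeforeAfterSketch {
    public static class Bean {
        public Bean() {}
    }

    // Written to by every measured operation so the JIT cannot
    // eliminate the allocation as dead code.
    static volatile Object sink;

    // Average nanoseconds per call of `op` over `iters` iterations.
    static double avgNs(Runnable op, int iters) {
        long start = System.nanoTime();
        for (int i = 0; i < iters; i++) {
            op.run();
        }
        return (System.nanoTime() - start) / (double) iters;
    }

    public static void main(String[] args) throws Exception {
        final Constructor<Bean> ctor = Bean.class.getConstructor();
        Runnable direct = () -> sink = new Bean();
        Runnable reflective = () -> {
            try {
                sink = ctor.newInstance();
            } catch (ReflectiveOperationException e) {
                throw new RuntimeException(e);
            }
        };
        int iters = 1_000_000;
        avgNs(direct, iters);      // warm-up pass so the JIT compiles both paths
        avgNs(reflective, iters);
        System.out.printf("direct:     %.1f ns/op%n", avgNs(direct, iters));
        System.out.printf("reflective: %.1f ns/op%n", avgNs(reflective, iters));
    }
}
```

Running the two measurements in the same JVM on the same machine is what makes the before/after comparison meaningful; the absolute numbers by themselves are not.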

raphw (Member) commented Feb 4, 2016

I very much understand the intention. But the test can only be run in two iterations: one before and one after a change is applied. These benchmarks then need to be run on the same machine under the exact same setup. Ideally, one would have some sort of baseline benchmark to capture this variation, as JMH benchmarks should be interpreted by their ratio rather than their absolute value. I see that this pull request has a benefit, but I would rather see such a baseline added to it.

vlsi (Contributor, Author) commented Feb 4, 2016

Do you mean I should add a file with the `java -jar benchmarks.jar` output to the PR?
I've never seen that in the wild.

raphw (Member) commented Feb 5, 2016

No, not at all. Such a machine-dependent result would make no sense as a comparison value. A baseline benchmark is measured to capture the benchmark overhead and put the raw number into proportion, since the raw number by itself does not mean a lot. For example, it is difficult to measure the performance of a micro-operation such as adding an element to a set. To put the result into proportion, I would add another benchmark that captures the noise of the machine to create a baseline number. If the set operation yields a result that is only insignificantly different from the baseline, the benchmark is probably not capturing the intended event:

@Benchmark
Object baseline() {
  return new Object();
}

@Benchmark
Object actualBenchmark() {
  Set<Object> set = new HashSet<>();
  set.add(new Object());
  return set;
}

I did however not yet find a good baseline for your proposed benchmark. Can you think of something?

vlsi (Contributor, Author) commented Feb 5, 2016

Finally I got you.

Here's the baseline for `BeansBenchmark`:

    @Benchmark
    public Object newBeans() {
        return new Beans();
    }

Anything else?

raphw (Member) commented Feb 5, 2016

That was too obvious. Good enough! If you add it, I'll merge the benchmark.

vlsi (Contributor, Author) commented Feb 5, 2016

I've rebased the PR.
Here's what I get out of `java -jar benchmarks.jar -prof gc`:

Benchmark                                        Mode  Cnt     Score     Error   Units
BeansBenchmark.baseline                          avgt    5     3,280 ±   0,107   ns/op
BeansBenchmark.baseline:·gc.alloc.rate.norm      avgt    5    16,000 ±   0,001    B/op
BeansBenchmark.newInstance                       avgt    5   138,938 ±   8,258   ns/op
BeansBenchmark.newInstance:·gc.alloc.rate.norm   avgt    5   264,000 ±   0,001    B/op

raphw pushed a commit that referenced this pull request Feb 5, 2016
Add module for jmh benchmarks
@raphw raphw merged commit 62140c4 into cglib:master Feb 5, 2016