Add module for jmh benchmarks #55
Conversation
Can you explain in a little more detail what this is and why it's useful? |
It is a very convenient tool that allows measuring and analyzing performance. |
Yup, I understand that jmh is for benchmarking. Do you plan to add more benchmarks? The framework is nice, but I'm hesitant to add it if all we're going to have is a benchmark for a single method. |
Force-pushed 8fcce65 to 18e07de
I am not sure about the value of this benchmark. An absolute value does not give any indication, as it is machine- and runtime-context-dependent. One would need to add some form of baseline to have a comparison value. Also, how does the flight recorder profile help here? Again, without a baseline, the results are dubious. |
@raphw, the benchmark is like a unit test for performance. If the benchmark is missing, there is no way to tell whether performance has regressed.
Flight recorder is for investigation purposes. Does that answer your questions? |
I very much understand the intention. But the test can only be run in two iterations, one before and one after a change is applied, and these benchmarks then need to be run on the same machine under the exact same setup. Ideally, one would have some sort of baseline benchmark to capture this variation, as JMH benchmarks should be interpreted by their ratio rather than their absolute value. I see that this pull request has a benefit, but I would rather see such a baseline added to it. |
Do you mean I should add a file with |
No, not at all. Such a machine-dependent result would make no sense as a comparison value. A baseline benchmark is measured to capture the benchmark overhead and to set the actual result into proportion, as the raw number of a benchmark does not mean a lot on its own. For example, it is difficult to measure the performance of a micro-operation such as adding an element to a set. To put the result into proportion, I would add another benchmark that captures the noise of the machine to create a baseline number. If the set operation yields a result that is only insignificantly different from the baseline, the benchmark is probably not capturing the intended event:

@Benchmark
Object baseline() {
    return new Object();
}

@Benchmark
Object actualBenchmark() {
    Set<Object> set = new HashSet<>();
    set.add(new Object());
    return set;
}

I did however not yet find a good baseline for your proposed benchmark. Can you think of something? |
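To make the ratio idea concrete, here is a minimal, JMH-free sketch of the same principle. It is an illustrative assumption, not part of this PR: the class and method names are invented, and a naive `System.nanoTime` loop stands in for JMH's warmup and measurement machinery. The point it demonstrates is that the meaningful, machine-independent signal is the ratio of the actual measurement to the baseline, not either absolute number:

```java
import java.util.HashSet;
import java.util.Set;

public class BaselineRatioSketch {

    // Baseline: captures loop and allocation overhead only.
    static long timeBaseline(int iterations) {
        long start = System.nanoTime();
        Object sink = null;
        for (int i = 0; i < iterations; i++) {
            sink = new Object();
        }
        long elapsed = System.nanoTime() - start;
        if (sink == null) throw new AssertionError("sink must not be null");
        return elapsed;
    }

    // Actual measurement: the micro-operation under test
    // (set creation plus one insertion) on top of the same overhead.
    static long timeActual(int iterations) {
        long start = System.nanoTime();
        Object sink = null;
        for (int i = 0; i < iterations; i++) {
            Set<Object> set = new HashSet<>();
            set.add(new Object());
            sink = set;
        }
        long elapsed = System.nanoTime() - start;
        if (sink == null) throw new AssertionError("sink must not be null");
        return elapsed;
    }

    public static void main(String[] args) {
        int iterations = 1_000_000;
        double ratio = (double) timeActual(iterations) / timeBaseline(iterations);
        // A ratio close to 1.0 would suggest the benchmark is dominated
        // by overhead and is not capturing the intended event.
        System.out.printf("actual/baseline ratio: %.2f%n", ratio);
    }
}
```

In real JMH the same comparison falls out of running both `@Benchmark` methods under identical settings and dividing their scores.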
Finally I got you. Here's the baseline:

@Benchmark
public Object newBeans() {
    return new Beans();
}

Anything else? |
That was too obvious. Good enough! If you add it, I'll merge the benchmark. |
I've rebased the PR. |