Updated Benchmarking #198
Conversation
```
- java -jar target/benchmark.jar
+ git clone https://github.com/aws/aws-xray-sdk-java.git
+ cd aws-xray-sdk-java
+ ./gradlew jmhJar
```
How about just ./gradlew jmh?
Yeah, I tried this, and it gave lots of messy output: Gradle's progress output (e.g. `[executing 1m 17s]`) would interfere with the JMH output, and the JMH output would somehow spill over and appear when running `./gradlew build` afterwards. I can include both options though, in case it's just a me problem.
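For comparison, the two invocation styles discussed above can be sketched as follows (the jar path under `build/libs` is an assumption about the Gradle layout, not verified against this repo):

```shell
# Option 1: let Gradle drive JMH directly (output may interleave with
# Gradle's own progress logging)
./gradlew jmh

# Option 2: build the standalone benchmark jar, then run it directly
# for clean JMH output
./gradlew jmhJar
java -jar aws-xray-recorder-sdk-benchmark/build/libs/aws-xray-recorder-sdk-benchmark-<VERSION>-jmh.jar
```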
If you wish to run a specific benchmark, please run the following command:
```
- java -jar target/benchmarks.jar <Benchmark_name_here>
+ java -jar aws-xray-recorder-sdk-benchmark-<VERSION>-jmh.jar <Benchmark_name_here>
```
Though it means forwarding a property into the Gradle config like here
https://github.com/line/armeria/blob/master/benchmarks/build.gradle#L32
I think that's only if we were using the jmh task with gradle, when executing the jar directly I verified that just passing the fully qualified name of a benchmark class or test works.
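The same selection can also be done programmatically through JMH's `Runner` API, which is what the jar's main method does under the hood with the command-line argument. A minimal sketch, where `TraceIdBenchmark` is a placeholder class name, not an actual benchmark in this repo:

```java
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class SingleBenchmarkRunner {
    public static void main(String[] args) throws RunnerException {
        Options opts = new OptionsBuilder()
                // include() takes a regex matched against fully qualified
                // benchmark names, so a bare class name selects every
                // @Benchmark method in that class
                .include("TraceIdBenchmark")
                .build();
        new Runner(opts).run();
    }
}
```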
```
import org.openjdk.jmh.runner.options.OptionsBuilder;

- @BenchmarkMode(Mode.All)
+ @BenchmarkMode({Mode.Throughput, Mode.SampleTime})
```
Thanks, was thinking about this too!
```
@Warmup(iterations = 20)
@Measurement(iterations = 20)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public Segment constructSegmentBenchmark(BenchmarkState state) {
```
While we're cleaning, can we look for benchmarks using `Level.Invocation` and remove them? They're not real benchmarks for us, since that level is designed for much longer-running invocations (guessing that when the benchmarks were added, someone didn't know about the dragons).
http://javadox.com/org.openjdk.jmh/jmh-core/1.7/org/openjdk/jmh/annotations/Level.html#Invocation
I can definitely remove them; it looks like `Level.Iteration` is more appropriate. Don't want to mess with any dragons.
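The difference between the two fixture levels can be sketched as below. The class and field names are illustrative, not actual classes from this SDK:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class SegmentSetupBenchmark {
    private StringBuilder context;

    // Level.Iteration: runs once per measurement iteration, so its cost is
    // amortized over the thousands of benchmark method calls in that iteration.
    @Setup(Level.Iteration)
    public void setUp() {
        context = new StringBuilder(); // stand-in for preparing segment context
    }

    // With Level.Invocation, the setup would instead run around every single
    // call of this method, and the harness's per-call timestamping overhead
    // would dominate an operation this fast.
    @Benchmark
    public int appendToContext() {
        return context.append('x').length();
    }
}
```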
Actually, it turns out some of the tests need a per-invocation setup, e.g. to put a segment in context then end it, or clear segment context before beginning a new segment.
So we can either keep the necessary invocation rules, or scrap the benchmarks that require them, since we already have full-lifecycle benchmarks (e.g. keeping only a beginEndSegment benchmark rather than separate beginSegment and endSegment benchmarks). I'm in favor of the latter, since some of the benchmarks can be quite redundant.
What do you think?
Discussed offline; will remove the non-lifecycle Recorder tests, since it's very possible that they are not accurate measurements. Using a workaround with `OperationsPerInvocation` for the entity benchmarks.
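The `@OperationsPerInvocation` workaround mentioned above can be sketched as follows: instead of per-invocation setup, run the operation N times inside one benchmark invocation and tell JMH to divide the reported score by N. The class and method names here are illustrative, not actual benchmarks from this repo:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.OperationsPerInvocation;

public class EntityBenchmark {
    private static final int OPS = 1000;

    @Benchmark
    @OperationsPerInvocation(OPS)
    public long entityOperationBatch() {
        long acc = 0;
        for (int i = 0; i < OPS; i++) {
            acc += i; // stand-in for one entity operation per loop pass
        }
        // Return a value derived from the loop so the JIT cannot
        // dead-code-eliminate the work being measured.
        return acc;
    }
}
```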
Co-authored-by: Anuraag Agrawal <anuraaga@gmail.com>
…ray-sdk-java into gradle-updates
```
}

@TearDown(Level.Invocation)
public void doTearDown() {
```
I think just running the setup on every iteration would be simpler than this, and it avoids needing to add the public API `setCause`.
Do you mean running setup every iteration or invocation?
Sorry meant invocation (both start with i and end with n!)
Agreed!
Description of changes:
Unified the benchmarks to have similar configurations (which the `jmh` task was set to use). Updated the README instructions to run benchmarks with Gradle instead of Maven. Also updated Gradle publishing to exclude unnecessary `.module` files from being published.
These changes allow us to run the benchmarks, including the newer trace ID ones, in 20 minutes as opposed to the 70+ minutes it took before, which will make releases a little easier in the future.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.