Add micro-benchmarks to JMH #159
Merged
Conversation
Create a module with runnable jmh-benchmarks for the togglz project with a simple benchmark that compares the overhead of the togglz library vs a simple boolean.
Awesome! Thank you so much for contributing this. Just a side note: I think I'll rename the module. Again: thanks so much for this!
There is a README.md file that describes how to execute these benchmarks.
I don't have them running automatically as part of the build because they take some time to run - a few minutes each.
You can either package them up and run them all at once, or run them individually in an IDE. Running them individually in an IDE is pretty cool: at least in IntelliJ, you can open the togglz project, make a change, and then run just the one benchmark you are interested in to see whether your tweak made things any better.
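For the "package and run them all" route, a typical JMH invocation looks like the sketch below. The module name `jmh-benchmarks` and the `benchmarks.jar` artifact name are assumptions based on common JMH/Maven conventions; the README in the PR describes the exact commands for this project.

```shell
# Build the benchmark uber-jar (module name assumed; check the README for the actual one)
mvn -pl jmh-benchmarks -am clean package

# Run every benchmark in the jar
java -jar jmh-benchmarks/target/benchmarks.jar

# Or pass a regex to run only the benchmarks you care about
java -jar jmh-benchmarks/target/benchmarks.jar ".*Baseline.*"
```

Passing a class-name regex as the first argument is standard JMH behavior, which is what makes the "run just the one benchmark" workflow convenient from the command line as well as from an IDE.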
I added a baseline one that compares the performance of a togglz feature check vs a plain boolean flag, another that looks at how it performs when you use an activation strategy, and another that tests how it performs when you use a ScriptEngine.
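The baseline comparison could be sketched roughly as below. This is a minimal illustration, not the PR's actual benchmark code: the enum name `BenchFeature` and class name `BaselineBenchmark` are hypothetical, and running it requires JMH and togglz on the classpath plus a configured `FeatureManager` (e.g. via a `FeatureManagerProvider`).

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.togglz.core.Feature;
import org.togglz.core.context.FeatureContext;

@State(Scope.Benchmark)
public class BaselineBenchmark {

    // Hypothetical feature enum for illustration only
    public enum BenchFeature implements Feature {
        FEATURE_ONE
    }

    private final boolean plainFlag = true;

    @Benchmark
    public boolean togglzCheck() {
        // Full togglz path: resolve the FeatureManager from the
        // context and evaluate the feature state on every call
        return FeatureContext.getFeatureManager().isActive(BenchFeature.FEATURE_ONE);
    }

    @Benchmark
    public boolean booleanCheck() {
        // Baseline: a plain field read, the cheapest possible check
        return plainFlag;
    }
}
```

Returning the boolean from each `@Benchmark` method (rather than discarding it) is the standard JMH idiom to keep the JIT from dead-code-eliminating the check being measured.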