
Add some microbenchmarks for Coders and DoFnReflector #553

Closed
wants to merge 8 commits

Conversation

bjchambers
Contributor

Be sure to do all of the following to help us incorporate your contribution
quickly and easily:

  • [*] Make sure the PR title is formatted like:
    [BEAM-<Jira issue #>] Description of pull request
  • [*] Make sure tests pass via mvn clean verify. (Even better, enable
    Travis-CI on your fork and ensure the whole test matrix passes).
  • [*] Replace <Jira issue #> in the title with the actual Jira issue
    number, if there is one.
  • [*] If this contribution is large, please file an Apache
    Individual Contributor License Agreement.

@bjchambers
Contributor Author

R: @davorbonaci @kennknowles

Here is a strawman PR to introduce some microbenchmarks for pieces of the SDK potentially in the critical path. This focuses on the cost of calling processElement in a DoFn using the various mechanisms available.
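For context, a minimal sketch of the kind of JMH benchmark this describes, comparing a direct processElement call against a reflective one; the package, class, and method names are hypothetical stand-ins rather than the PR's actual code:

```java
package org.example.microbenchmarks; // hypothetical package, not the PR's

import java.lang.reflect.Method;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

/** Measures the dispatch cost of calling processElement directly vs. reflectively. */
@State(Scope.Benchmark)
public class DoFnInvocationBenchmark {

  /** Plain stand-in for a user DoFn; the real SDK classes are not needed for the sketch. */
  public static class CountingFn {
    long total;

    public void processElement(String element) {
      total += element.length();
    }
  }

  private CountingFn fn;
  private Method processElement;

  @Setup
  public void setup() throws NoSuchMethodException {
    fn = new CountingFn();
    processElement = CountingFn.class.getMethod("processElement", String.class);
  }

  @Benchmark
  public long directCall() {
    fn.processElement("element");
    return fn.total;
  }

  @Benchmark
  public long reflectiveCall() throws Exception {
    processElement.invoke(fn, "element");
    return fn.total;
  }
}
```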

@davorbonaci
Member

@dhalperi, this is important process-wise, in terms of how we manage changes to the benchmarks.

@bjchambers bjchambers changed the title from Add initial microbenchmark for DoFnReflector to Add some microbenchmarks for Coders and DoFnReflector on Jun 29, 2016
@bjchambers
Contributor Author

Added some microbenchmarks for Coders and cleaned up the build issues.


3. run benchmark harness:

java -jar target/microbenchmarks.jar
Member


Do you get legitimate results from an invocation such as mvn exec:java -DmainClass=<whatever>, or is there something wrong with that? Just to see if this might be a one-liner.

Contributor Author


I don't think so, based on what I've read on Stack Overflow. My goal here is to get the microbenchmarks added so we have data points for other PRs. Let's look separately into how we can get JMH results into some database, possibly via mvn, etc.

@kennknowles
Member

Do the benchmarks depend on package-private stuff? I'd prefer to avoid the split package, meaning that we would put this stuff somewhere under a new benchmarks namespace or some such rather than the namespaces from the core SDK.

@bjchambers
Contributor Author

They did depend on package-private stuff, but I made it public and moved them into their own microbenchmarks package. I could conceive of wanting to use package-private stuff in the same way tests do, but we can always do so by adding a thin wrapper to expose methods to benchmarks.
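As a rough sketch of that thin-wrapper idea (the package, class, and method names below are hypothetical, not from the PR): a small public class placed in the same package as the package-private code re-exposes only what the benchmarks need.

```java
package org.example.sdk.internal; // hypothetical package standing in for an SDK package

/** Stand-in for SDK code whose useful entry point is package-private. */
class InternalHelper {
  static int packagePrivateWork(int input) {
    return input * 2;
  }
}

/**
 * Thin wrapper living in the same package: it can see the package-private
 * helper and re-exposes only what the benchmarks need.
 */
public final class BenchmarkAccess {
  private BenchmarkAccess() {}

  public static int runPackagePrivateWork(int input) {
    return InternalHelper.packagePrivateWork(input);
  }
}
```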

@kennknowles
Member

LGTM. Since there are no backwards-compatibility concerns, I am very comfortable merging, and if another committer has a better idea of how to organize things it is easy to adjust. Self-merge any time, or feel free to solicit other opinions.

import java.util.Arrays;

/**
* Benchmarks for AvroCoder.
Contributor


link this
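Presumably this asks for a Javadoc class link; a minimal sketch of the requested change, assuming the summary sentence otherwise stays the same:

```java
/**
 * Benchmarks for {@link AvroCoder}.
 */
```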

@dhalperi
Contributor

dhalperi commented Jul 1, 2016

Looks like no tests are run in the normal situation. How do we prevent bitrot?

@kennknowles
Member

That's a great point about bitrot. It should suffice to run the benchmarks on a tiny amount of input to ensure that they build & don't crash.
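A minimal sketch of that kind of smoke run, assuming JMH's Runner API and a hypothetical include pattern (nothing here is taken from the PR itself):

```java
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

/** Smoke run: a single fork, no warmup, one measurement iteration per benchmark. */
public class MicrobenchmarksSmokeRun {
  public static void main(String[] args) throws RunnerException {
    Options options = new OptionsBuilder()
        .include(".*Benchmark.*") // hypothetical pattern; adjust to the benchmark class names
        .forks(1)
        .warmupIterations(0)
        .measurementIterations(1)
        .build();
    new Runner(options).run();
  }
}
```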

@bjchambers
Contributor Author

I've added links. I plan to submit as-is, since these benchmarks add value. The most likely bitrot is prevented by getting them in and compiling regularly. Beyond that, we should look at either (1) adding explicit unit tests for them, which is a bit odd since benchmarks are already test-like, or (2) including the benchmark runs in some nightly job and putting the results on a dashboard somewhere. I think (2) is the preferable way to prevent bitrot.

@asfgit asfgit closed this in 88db3be Jul 2, 2016
@bjchambers bjchambers deleted the microbenchmarks branch November 21, 2016 21:41