
Do more per VM invocation #32

Open
GoogleCodeExporter opened this issue May 19, 2015 · 10 comments

Comments

@GoogleCodeExporter

Suppose I'm testing 4 x 3 different parameter values against 5 different benchmarks (different time- methods in the same class) on 2 VMs. Currently, to get one measurement each, we'll run 4 x 3 x 5 x 2 = 120 VM invocations. I think 10 would be enough -- VMs times benchmarks -- and let each run handle all 4 x 3 parameter combinations for that (VM, benchmark) pair.

The problem with the way it is today is that HotSpot can optimize away whole swaths of implementation code that don't happen to get exercised by the *one* scenario we run it with. By warming up all 12 of these benchmark instances, it should have to compile to something more closely resembling real life (maybe). And with luck, we can avoid the expense of repeating the warmup period 12 times over.
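
A minimal sketch of what that could look like, using made-up names rather than Caliper's actual runner API: one worker VM warms up every parameter combination for its benchmark before timing any of them, so the warmup cost is paid once per VM and HotSpot sees all the code paths.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not Caliper's actual runner: one worker VM warms up
// every parameter combination for a single benchmark before any timed trial.
public class SingleVmWorkerSketch {

  // A "scenario" stands in for one parameter combination of the benchmark.
  interface Scenario {
    String name();
    void runOnce();
  }

  public static void main(String[] args) {
    List<Scenario> scenarios = new ArrayList<Scenario>();
    for (int i = 0; i < 12; i++) {                       // 4 x 3 parameter combinations
      final int n = i;
      scenarios.add(new Scenario() {
        public String name() { return "scenario-" + n; }
        public void runOnce() { Math.sqrt(n * 1234.5); } // placeholder workload
      });
    }

    // Warm up *all* scenarios first, paying the warmup cost once per VM
    // and letting HotSpot compile against every code path.
    for (Scenario s : scenarios) {
      for (int i = 0; i < 100000; i++) {
        s.runOnce();
      }
    }

    // Then time each scenario in the same, already-warm VM.
    for (Scenario s : scenarios) {
      long start = System.nanoTime();
      for (int i = 0; i < 1000000; i++) {
        s.runOnce();
      }
      long nanos = System.nanoTime() - start;
      System.out.printf("%s: %.2f ns/rep%n", s.name(), nanos / 1000000.0);
    }
  }
}
```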

After warming up all the different scenarios and then starting to do trials of one of them, I'm not sure whether we need to worry about HotSpot deciding to *re*compile based on the new favorite scenario. If that happens, maybe it makes sense for us to round-robin through the scenarios as we go... we'll see.

I'm also not sure how concerned we need to be that the order the scenarios are timed in can unduly affect the results. It could be that for each "redundant" measurement we take, we vary the order (e.g. we rotate it?) in order to wash that out. Or maybe there's no problem with this; I dunno.
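
For the ordering question, one possibility (just a sketch, nothing Caliper actually does) is to rotate the scenario list between measurement passes, so each "redundant" measurement visits the scenarios in a different order:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch of the "rotate the order" idea: each measurement pass visits the
// scenarios in a shifted order, so no scenario is always warmed or timed first.
public class RotatedOrderSketch {
  public static void main(String[] args) {
    List<String> scenarios =
        new ArrayList<String>(Arrays.asList("s1", "s2", "s3", "s4"));

    int passes = 4;   // one "redundant" measurement of every scenario per pass
    for (int pass = 0; pass < passes; pass++) {
      for (String s : scenarios) {
        // timeScenario(s) would go here; we just print the visiting order.
        System.out.print(s + " ");
      }
      System.out.println();
      Collections.rotate(scenarios, 1);   // shift the starting point for the next pass
    }
  }
}
```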

Original issue reported on code.google.com by kevinb@google.com on 22 Jan 2010 at 10:53

@GoogleCodeExporter
Author

Note that this is a correctness issue -- Caliper is currently reporting totally bogus results.

Original comment by kevinb@google.com on 7 Jun 2010 at 5:48

  • Changed title: Do more per VM invocation
  • Added labels: Milestone-0.5

@GoogleCodeExporter
Author

Original comment by kevinb@google.com on 14 Jan 2011 at 11:09

  • Added labels: Milestone-1.0

@GoogleCodeExporter
Author

I can definitely tell that the order of warming up will affect HotSpot statistics, and if different choices are made, the results will be different. This is the case we have in the JUnitBenchmarks project -- the order of JUnit tests turned into benchmarks does affect the outcome (I only noticed this when I compared the results against the same test executed in Caliper in separate VMs).

Original comment by dawid.weiss@gmail.com on 4 Mar 2011 at 4:43

@GoogleCodeExporter
Author

Original comment by kevinb@google.com on 19 Mar 2011 at 2:13

  • Added labels: Milestone-Post-1.0

@GoogleCodeExporter
Author

Original comment by kevinb@google.com on 19 Mar 2011 at 3:06

  • Added labels: Type-Enhancement

@GoogleCodeExporter
Author

Original comment by kevinb@google.com on 8 Feb 2012 at 9:49

  • Added labels: Component-Runner

@GoogleCodeExporter
Author

Original comment by kevinb@google.com on 1 Nov 2012 at 8:32

@GoogleCodeExporter
Author

+martin is expressing some concern about this too.

Original comment by kevinb@google.com on 11 Oct 2013 at 9:40

@GoogleCodeExporter
Author

I have also seen benchmark results highly dependent on the order of warmup - the JIT optimizes for the profile collected during the first method's warmup. That said, I would in general prefer to have my methods run in the same VM, to make execution less artificial. So I support this change, but y'all had better vary warmup order.

Original comment by marti...@google.com on 11 Oct 2013 at 9:48

@GoogleCodeExporter
Author

I remain skeptical that such a change would make a benchmark less artificial. They would just be artificial in a different way that is less predictable. We know that microbenchmarks are somewhat contrived anyway, so half-efforts toward making them "realistic" feel a little futile.

There's also a fair bit of work to be done to figure out how to make this work 
since the relatively simple estimation work that we do to guess at reps for 
microbenchmark warmup would need to be replaced.

That said, I see no reason to have it as an option if we can devise a strategy 
to make it work.

I should also mention that if our primary concern is total run execution time, a far easier target than this warmup business is just running instruments that aren't sensitive to resource constraints (e.g. the allocation instrument) concurrently.
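
As a rough illustration of that last point (hypothetical, not Caliper's actual scheduler), trials from instruments that don't need a quiet machine could be farmed out to a small thread pool and run concurrently:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: run trials of resource-insensitive instruments (like an
// allocation-counting instrument) concurrently, while timing trials would still
// get the machine to themselves.
public class ConcurrentInstrumentSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<String>> results = new ArrayList<Future<String>>();

    for (int i = 0; i < 8; i++) {
      final int trial = i;
      results.add(pool.submit(new Callable<String>() {
        public String call() {
          // A real allocation instrument would launch a worker VM here and
          // collect its output; this just returns a placeholder result.
          return "allocation trial " + trial + " done";
        }
      }));
    }

    for (Future<String> f : results) {
      System.out.println(f.get());
    }
    pool.shutdown();
  }
}
```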

Original comment by gak@google.com on 12 Oct 2013 at 2:37
