Fix SR-4598 Add option to run subset of benchmarks matching a prefix #8975
Conversation
Looking at this one now.

@palimondo at some point, we should really add unit tests for the driver. But we can do that later. It would give more confidence around these sorts of changes.
Can you make these changes and send back to me for a quick look over?
benchmark/scripts/Benchmark_Driver
Outdated
parent_parser = argparse.ArgumentParser(add_help=False)
benchmarks_group = parent_parser.add_mutually_exclusive_group()
benchmarks_group.add_argument(
    'benchmark',
Since this is a list, can you change the destination to benchmarks? IIRC the argument is "dest".
So the user will still write:
--benchmark foo --benchmark bar
But programmatically we will see:
args.benchmarks
Benchmarks are positional arguments - just a list of names at the end of the line:
mondo$ $BUILD/bin/Benchmark_Driver run -o O MonteCarloE AngryPhonebook
1,AngryPhonebook,1,13177,13177,13177,0,13177,11046912
124,MonteCarloE,1,50568,50568,50568,0,50568,18423808
Totals,2,63745,63745,63745,0,63745,29470720
Ah, I misunderstood. You just wanted to rename the internal argument name. I was talking about the fact that this argument has no user-facing part.
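For illustration, here is a minimal argparse sketch of the rename being discussed, assuming a plain parser rather than the driver's actual parser hierarchy (the `prog`, `metavar`, and help text are illustrative): naming the positional `benchmarks` is enough to change the attribute on `args`, and there is still no user-facing flag.

```python
import argparse

# Minimal sketch (not the actual Benchmark_Driver code): a positional,
# repeatable benchmark list whose parsed values show up as args.benchmarks.
parser = argparse.ArgumentParser(prog='Benchmark_Driver')
parser.add_argument(
    'benchmarks',            # positional, so no flag is needed on the command line
    nargs='*',               # zero or more names at the end of the line
    metavar='BENCHMARK',
    help='benchmark names to run (default: run all)')

args = parser.parse_args(['MonteCarloE', 'AngryPhonebook'])
print(args.benchmarks)       # ['MonteCarloE', 'AngryPhonebook']
```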
benchmark/scripts/Benchmark_Driver
Outdated
return filter(lambda name: name.startswith(args.filter), tests)
if not args.benchmark:
    return tests
return sorted(list(set(tests).intersection(set(args.benchmark))))
This will silently fail if someone passes in a non-matching benchmark. We should either emit a warning or fail in such a case.
It doesn't fail silently. It executes no tests:
$ $BUILD/bin/Benchmark_Driver run -o O --iterations 2 Malarky
# TEST SAMPLES MIN(μs) MAX(μs) MEAN(μs) SD(μs) MEDIAN(μs) MAX_RSS(B)
$ $BUILD/bin/Benchmark_Driver run -o O --iterations 2 MonteCarloE Malarky
# TEST SAMPLES MIN(μs) MAX(μs) MEAN(μs) SD(μs) MEDIAN(μs) MAX_RSS(B)
124 MonteCarloE 2 36580 37602 37091 722 37602 24047616
Totals 1 36580 37602 37091 722 37602 24047616
Logging results to: /Users/mondo/Desktop/logs-test/refactor-compare-perf/Benchmark_O-20170425012616-f3b33f06d4.log
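To make that concrete, here is a small standalone sketch of the selection step, simplified from the code quoted above (the function name and test list are illustrative): names with no match are simply dropped by the intersection, so an unknown name yields an empty selection rather than an error.

```python
# Simplified sketch of the selection logic quoted above: names that don't
# match any known test are silently dropped by the set intersection.
def select_tests(tests, benchmarks):
    if not benchmarks:
        return tests
    return sorted(set(tests).intersection(benchmarks))

all_tests = ['AngryPhonebook', 'MonteCarloE', 'MonteCarloPi']
print(select_tests(all_tests, ['MonteCarloE', 'Malarky']))  # ['MonteCarloE']
print(select_tests(all_tests, ['Malarky']))                 # [] -> nothing runs
```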
help='optimization levels to use (default: O Onone Ounchecked)',
default=['O', 'Onone', 'Ounchecked'])
submit_parser.add_argument(
    '-i', '--iterations',
I am going to ignore this for now, but we should really change this argument to --samples. In terms of the lower-level benchmark driver, iterations is proportional to the length of a sample. This argument is really changing the number of individual samples that we are running.
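Purely as a hypothetical sketch of that rename, assuming an argparse setup similar to the snippet above (the flag spelling, default value, and parser name are assumptions, not the driver's actual code):

```python
import argparse

# Hypothetical sketch only: this PR keeps '-i/--iterations'. A later change
# could expose the same knob as '--samples', since it controls how many
# independent samples are taken rather than the length of each sample.
submit_parser = argparse.ArgumentParser(prog='Benchmark_Driver submit')
submit_parser.add_argument(
    '-i', '--samples',
    type=int, default=1,     # default is illustrative
    dest='iterations',       # keep the old attribute name so existing code still works
    help='number of independent samples to take per benchmark')

print(submit_parser.parse_args(['-i', '3']).iterations)  # 3
```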
I think I answered your remarks. Is there any other change you want me to make?

Here's the current usage help, to illustrate how the
Fixed python-lint warnings
Let's get this in now. In a separate PR, can you add a hard error if a selected benchmark is specified but is not available? The reason I called such a thing a silent failure is that the user is specifically stating that they want a specific benchmark to run. If that benchmark doesn't run, from an API perspective, we should fail early and quickly. That being said, I am fine with doing this in a later commit.
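A minimal sketch of what that hard error could look like in a follow-up, building on the selection logic quoted earlier (the function name and message wording are hypothetical):

```python
import sys

# Hypothetical follow-up sketch: fail fast when a requested benchmark name
# doesn't exist, instead of silently running an empty test list.
def select_tests_or_die(tests, benchmarks):
    if not benchmarks:
        return tests
    unknown = sorted(set(benchmarks) - set(tests))
    if unknown:
        sys.exit('error: unknown benchmark(s): ' + ', '.join(unknown))
    return sorted(set(tests).intersection(benchmarks))
```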
@swift-ci smoke benchmark

Build comment file:
Optimized (O): Regression (0), Improvement (0), No Changes (268)
Regression (1), Improvement (1), No Changes (266)
@swift-ci smoke test and merge

1 similar comment
Resolves SR-4598.
Broken out of #8793.
@gottesmm Please review