Add --output-warmup-metrics flag to cpu userbenchmark scripts #2604
murste01 wants to merge 1 commit into pytorch:main
Conversation
Adds a new `--output-warmup-metrics` flag that includes warmup metrics in the benchmark result JSON files. This allows us to analyse the warmup iterations and decide how many are sufficient.
An example with and without the new flag (collapsed details: example command and output tree for each case):
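For illustration, the extra data in the result JSON might look something like this — the key names and values below are assumptions for the sake of the example, not the PR's actual schema:

```python
# Hypothetical shape of a cpu userbenchmark result file with the flag enabled;
# key names and latency values are illustrative assumptions, not the PR's schema.
result_with_warmup_metrics = {
    "name": "cpu",
    "metrics": {
        "latencies": [12.1, 12.0, 11.9],         # benchmark iterations (ms)
        "warmup_latencies": [15.3, 13.2, 12.4],  # present only with the flag
    },
}
```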
cc: @FindHao. Thanks in advance!
```diff
 def get_latencies(
     func, device: str, nwarmup=WARMUP_ROUNDS, num_iter=BENCHMARK_ITERS
-) -> List[float]:
+) -> Tuple[List[float], List[float]]:
```
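For context, a minimal sketch of what the changed signature implies: the function would time the warmup iterations instead of discarding them and return both lists. The body below is illustrative only, not the PR's actual implementation, and the default constants are assumed:

```python
import time
from typing import Callable, List, Tuple

WARMUP_ROUNDS = 10    # assumed defaults; the real constants live in the script
BENCHMARK_ITERS = 50


def get_latencies(
    func: Callable[[], None],
    device: str,  # accepted for signature compatibility; unused in this sketch
    nwarmup: int = WARMUP_ROUNDS,
    num_iter: int = BENCHMARK_ITERS,
) -> Tuple[List[float], List[float]]:
    # Record warmup timings instead of throwing them away.
    warmup_latencies: List[float] = []
    for _ in range(nwarmup):
        t0 = time.perf_counter()
        func()
        warmup_latencies.append((time.perf_counter() - t0) * 1e3)  # ms

    latencies: List[float] = []
    for _ in range(num_iter):
        t0 = time.perf_counter()
        func()
        latencies.append((time.perf_counter() - t0) * 1e3)  # ms

    return warmup_latencies, latencies
```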
I'm concerned that this PR introduces too significant a change to the core APIs, not only for this line. As an alternative, can you consider adding an option to skip the warmup phase and use the actual run results as the 'warmup' results?
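For illustration, one way to read that suggestion — a sketch only; the `skip_warmup` parameter is hypothetical and not part of this PR:

```python
import time
from typing import Callable, List


def get_latencies(
    func: Callable[[], None],
    device: str,
    nwarmup: int = 10,        # assumed defaults
    num_iter: int = 50,
    skip_warmup: bool = False,  # hypothetical option, not in the PR
) -> List[float]:
    # When skip_warmup is set, no iterations are discarded: the first recorded
    # latencies are the cold ones, so the existing per-iteration metrics double
    # as warmup data without changing the return type of the core API.
    if not skip_warmup:
        for _ in range(nwarmup):
            func()  # warmup timings discarded, as before
    latencies: List[float] = []
    for _ in range(num_iter):
        t0 = time.perf_counter()
        func()
        latencies.append((time.perf_counter() - t0) * 1e3)  # ms
    return latencies
```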
I've decided to drop this change in favour of the approach you suggested. Thanks @FindHao.