
[SR-4669] Add a Benchmark_Driver --rerun N option #47246

Description

Previous ID SR-4669
Radar rdar://problem/32029925
Original Reporter @atrick
Type New Feature
Status Closed
Resolution Done

Attachment: Download

Additional Detail from JIRA
Votes 1
Component/s Source Tooling
Labels New Feature, StarterBug
Assignee @palimondo
Priority Medium

md5: 904f74a88b2c2fd58519e6a6f68673ba

is duplicated by:

  • SR-4814 Smoke benchmark optimization

Issue Description:

This feature would work as follows:

1. Run all the benchmarks as usual, according to all the other options, just like 'Benchmark_Driver run'.

2. Run the compare script, just like 'Benchmark_Driver compare'.

3. From the comparison output, scrape the list of tests with significant regressions and improvements.

4. Rerun just that subset for N iterations. The user would normally want N to be much higher than the initial iteration count, say 20 vs. 3. The output of each rerun should simply be appended to the previous output; that's how the compare_script was originally designed to work.
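The four steps above can be sketched as a small driver function. The three callables (`run`, `compare`, `parse_significant`) are placeholders for functionality the driver already has or that this issue proposes; their names and signatures are assumptions for illustration, not the driver's actual API.

```python
def rerun_flow(run, compare, parse_significant, n=20, initial_iters=3):
    """Sketch of the proposed --rerun N flow.

    Assumed placeholder interfaces (not the real driver API):
      run(tests, iters)        -> list of result lines; tests=None means all
      compare(results)         -> comparison report string
      parse_significant(text)  -> list of test names that changed significantly
    """
    results = run(None, iters=initial_iters)     # 1. run everything as usual
    report = compare(results)                    # 2. compare against the baseline
    subset = parse_significant(report)           # 3. scrape significant tests
    if subset:
        results += run(subset, iters=n)          # 4. rerun subset, append output
    return results
```

Appending the rerun output to the earlier results (rather than replacing it) matches how the compare script was designed: later samples of the same test refine the earlier measurement.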

The driver already has almost all of the functionality needed; the only missing piece is parsing the compare_script's output.
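That missing piece might look like the following. The report format assumed here (markdown-style tables under "Regression" / "Improvement" headers, test name in the first column) is an illustration only; the real compare_script output would need to be checked before relying on it.

```python
def parse_significant_tests(report):
    """Collect test names from the Regression and Improvement sections
    of a comparison report.

    ASSUMPTION: the report groups results into markdown tables under
    headers such as **Regression** / **Improvement** / **No Changes**,
    with the test name in the first column. Adjust to the actual
    compare_script output format.
    """
    tests = []
    section = None
    for line in report.splitlines():
        header = line.strip().strip('*').strip()
        if header in ('Regression', 'Improvement'):
            section = header
            continue
        if header in ('No Changes', 'Added', 'Removed'):
            section = None
            continue
        if section and line.lstrip().startswith('|'):
            cells = [c.strip() for c in line.strip().strip('|').split('|')]
            name = cells[0]
            # skip the header row and the |---|---| separator row
            if name and name != 'TEST' and not set(name) <= set('-: '):
                tests.append(name)
    return tests

report = """\
**Regression**
| TEST | OLD | NEW | DELTA |
|---|---|---|---|
| Ackermann | 754 | 865 | +14.7% |

**Improvement**
| TEST | OLD | NEW | DELTA |
|---|---|---|---|
| DictionarySwap | 1220 | 1030 | -15.6% |

**No Changes**
| TEST | OLD | NEW | DELTA |
|---|---|---|---|
| ArrayAppend | 930 | 931 | +0.1% |
"""

print(parse_significant_tests(report))  # ['Ackermann', 'DictionarySwap']
```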

An alternative would be to make the compare_script a Python module that the driver can import.
