
Initial Google Benchmark results conversion #275

Merged (2 commits into master, Sep 3, 2020)
Conversation

@cottsay (Contributor) commented Aug 31, 2020

This change connects the Google Benchmark tests to the Jenkins Benchmark plugin to display the results and warn developers when thresholds are exceeded.

Here is a temporary job demonstrating this functionality for review purposes: http://build.ros2.org/view/Rci/job/Rci__benchmark_ubuntu_focal_amd64/BenchmarkTable/

Signed-off-by: Scott K Logan <logans@cottsay.net>
@cottsay cottsay added enhancement New feature or request in review Waiting for review (Kanban column) labels Aug 31, 2020
@cottsay cottsay self-assigned this Aug 31, 2020

    if not out_data[group_name]:
        print(
            'WARNING: The perfromance test results file contained no results',
@cottsay nit:

Suggested change:
-            'WARNING: The perfromance test results file contained no results',
+            'WARNING: The performance test results file contained no results',

@hidmic commented:

Also, should this raise instead and leave the caller to handle the error? I believe json serialization and deserialization can also raise.
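The concern above is that `json.load` already raises `json.JSONDecodeError` on malformed input, so the conversion code has to decide between warning and propagating. A minimal sketch of the caller-handles-it approach (function name and wrapping exception are illustrative, not the code under review):

```python
import json


def load_benchmark_results(path):
    """Parse a Google Benchmark JSON results file.

    json.load() raises json.JSONDecodeError on malformed input; this
    sketch wraps it so the caller can decide whether a bad results file
    is a warning or a hard failure.
    """
    with open(path) as f:
        try:
            return json.load(f)
        except json.JSONDecodeError as e:
            raise ValueError(
                'Invalid benchmark results in {}: {}'.format(path, e)) from e
```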

@cottsay (author) replied:

... should this raise instead ...?

I think it's reasonable to allow the benchmark run to generate no results, possibly due to arguments that end up skipping all of the tests. As far as I can tell, Google Benchmark seems to generate a completely empty file when this happens (which I'm now handling in 25a94a8), but I could see it being possible to generate valid JSON that doesn't contain any results we're interested in.
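The empty-file behavior described above can be sketched as follows: treat a zero-byte results file as "no results" rather than a parse error (function name is illustrative; the actual handling is in commit 25a94a8):

```python
import json
import os


def read_results(path):
    """Return parsed benchmark results, or None for an empty file.

    Google Benchmark appears to write a completely empty file when every
    test is skipped, so an empty file is treated as "no results" instead
    of being fed to the JSON parser (which would raise).
    """
    if os.path.getsize(path) == 0:
        return None
    with open(path) as f:
        return json.load(f)
```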

I'll probably need to make a follow-up PR after this one gets merged to handle more than just the "iteration" type of benchmarks. Right now, the "aggregate" types are causing parse problems. After that change, we'd specifically be converting only specific metrics, making it a realistic possibility that we'd encounter valid JSON but no convertible results, despite there not being any problems.
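The iteration-vs-aggregate split mentioned above could look like this: Google Benchmark tags each entry with a `run_type`, and repeated runs add `"aggregate"` entries (mean/median/stddev) alongside the `"iteration"` ones. A sketch of filtering to only the convertible entries (not the merged code, just an illustration of the follow-up described here):

```python
def iteration_benchmarks(results):
    """Yield only per-iteration entries from a parsed results dict.

    Entries with run_type "aggregate" (produced by repeated runs) are
    skipped; entries with no run_type key are assumed to be iterations.
    """
    for bench in results.get('benchmarks', []):
        if bench.get('run_type', 'iteration') == 'iteration':
            yield bench
```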

Signed-off-by: Scott K Logan <logans@cottsay.net>
@cottsay cottsay requested a review from hidmic September 1, 2020 18:28
@wjwwood left a comment:

Seems reasonable to me, with @hidmic's concern addressed. I like the JSON file for configuring limits.
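A JSON limits file of the kind praised above might be checked like this. The key names (`cpu_time_max`, `real_time_max`) and benchmark names here are assumptions for illustration, not the actual schema used by the PR:

```python
import json

# Hypothetical per-benchmark limits file; times in the same units
# (typically nanoseconds) that Google Benchmark reports.
LIMITS_JSON = '''
{
  "BM_Spin/1000": {"cpu_time_max": 2500.0},
  "BM_Sleep/100": {"real_time_max": 150000.0}
}
'''


def exceeded_limits(benchmarks, limits):
    """Return names of benchmarks whose measured times exceed a limit."""
    failures = []
    for bench in benchmarks:
        limit = limits.get(bench['name'], {})
        if bench.get('cpu_time', 0.0) > limit.get('cpu_time_max', float('inf')):
            failures.append(bench['name'])
        elif bench.get('real_time', 0.0) > limit.get('real_time_max', float('inf')):
            failures.append(bench['name'])
    return failures
```

Keeping the thresholds in a data file rather than in the script lets each package tune its own limits without touching the conversion code.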

@cottsay (author) commented Sep 3, 2020

Thanks for the feedback, guys. I'm going to merge this PR so that I can make the follow-up changes I mentioned in the comment I made earlier.

@cottsay cottsay merged commit ed2f7b3 into master Sep 3, 2020
@delete-merged-branch delete-merged-branch bot deleted the benchmark_results branch September 3, 2020 00:59
ahcorde pushed a commit that referenced this pull request Oct 2, 2020
Signed-off-by: Scott K Logan <logans@cottsay.net>
ahcorde pushed a commit that referenced this pull request Oct 6, 2020
Signed-off-by: Scott K Logan <logans@cottsay.net>