Initial Google Benchmark results conversion #275
Conversation
Signed-off-by: Scott K Logan <logans@cottsay.net>
ament_cmake_google_benchmark/ament_cmake_google_benchmark/__init__.py
```python
if not out_data[group_name]:
    print(
        'WARNING: The perfromance test results file contained no results',
```
@cottsay nit:

```diff
-        'WARNING: The perfromance test results file contained no results',
+        'WARNING: The performance test results file contained no results',
```
Also, should this `raise` instead and leave the caller to handle the error? I believe JSON serialization and deserialization can also raise.
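For context, here is a minimal sketch of the failure mode being discussed; the function name and structure are hypothetical, not from the PR. `json.loads` raises `json.JSONDecodeError` on malformed input, which includes the completely empty file Google Benchmark writes when every benchmark is skipped, so a loader has to decide whether to catch that or let it propagate:

```python
import json


def load_benchmark_results(path):
    """Load a Google Benchmark JSON results file (hypothetical helper).

    json.loads raises json.JSONDecodeError on malformed input, including
    a completely empty file. Here an empty file is treated as "no results"
    rather than a parse error; anything else malformed still raises.
    """
    with open(path) as f:
        content = f.read()
    if not content.strip():
        # Benchmark run produced no output at all
        return None
    return json.loads(content)
```

Whether to return a sentinel like `None` here or `raise` and let the caller decide is exactly the design question raised above.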
> ... should this `raise` instead ...?
I think it's reasonable to allow the benchmark run to generate no results, possibly due to arguments that end up skipping all of the tests. As far as I can tell, Google Benchmark seems to generate a completely empty file when this happens (which I'm now handling in 25a94a8), but I could see it being possible to generate valid JSON that doesn't contain any results we're interested in.
I'll probably need to make a follow-up PR after this one gets merged to handle more than just the "iteration" type of benchmarks. Right now, the "aggregate" types are causing parse problems. After that change, we'd be converting only specific metrics, making it a realistic possibility that we'd encounter valid JSON with no convertible results, even though nothing actually went wrong.
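The iteration-vs-aggregate distinction above can be sketched as follows. Google Benchmark tags each entry in its JSON output with a `run_type` field; aggregate entries (mean, median, stddev) carry different fields than per-iteration results. This filter is an illustrative sketch, not the PR's actual code:

```python
def extract_iteration_benchmarks(data):
    """Keep only per-iteration results from parsed Google Benchmark JSON.

    Aggregate entries ('run_type': 'aggregate') summarize repeated runs
    and use different fields, so a converter that only understands
    iteration results should skip them. Hypothetical helper for
    illustration only.
    """
    return [
        b for b in data.get('benchmarks', [])
        if b.get('run_type', 'iteration') == 'iteration'
    ]
```

With such a filter in place, a results file containing only aggregates would yield valid JSON but an empty list of convertible results, which is the scenario described above.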
Signed-off-by: Scott K Logan <logans@cottsay.net>
Seems reasonable to me, with @hidmic's concern addressed. I like the JSON file for configuring limits.
Thanks for the feedback, guys. I'm going to merge this PR so that I can make the follow-up changes I mentioned in the comment I made earlier.
Signed-off-by: Scott K Logan <logans@cottsay.net>
Signed-off-by: Scott K Logan <logans@cottsay.net>
This change connects the Google Benchmark tests to the Jenkins Benchmark plugin to display the results and warn developers when thresholds are exceeded.
Here is a temporary job demonstrating this functionality for review purposes: http://build.ros2.org/view/Rci/job/Rci__benchmark_ubuntu_focal_amd64/BenchmarkTable/