add actual test time to xUnit result files #269
Please consider contributing a pull request to provide the actual test run time in the result.
What would you suggest with respect to the means of measurement? Simple elapsed time from the system clock, from start to stop of the python subprocess, or something like per-process CPU time accounting? The first would simply work on all platforms, but could be inaccurate if the CI runner is heavily loaded with many parallel jobs competing for CPU time via the OS scheduler. The second would account for that, but I'm not sure how it could be done for non-unix platforms.
Measuring the delta between the start and end time seems like the only option which makes sense. Imo the CPU usage has nothing to do with the reported test time. A test could easily sleep for an hour, which wouldn't show up in the CPU time. Also, other tools like CTest report the time between start and end.
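The wall-clock approach described above can be sketched as follows. This is an illustrative example, not code from the module; the function name run_and_time is hypothetical.

```python
import subprocess
import time


def run_and_time(cmd):
    """Run a command and return (returncode, elapsed wall-clock seconds).

    Uses a monotonic clock so the measurement is immune to system clock
    adjustments. The result includes time spent sleeping or blocked,
    matching what tools like CTest report as the test duration.
    """
    start = time.monotonic()
    result = subprocess.run(cmd)
    elapsed = time.monotonic() - start
    return result.returncode, elapsed
```

Because subprocess.run waits for the child to exit, the delta covers the full lifetime of the test process on any platform.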
Side note: it looks like the result file is simply overwritten on each run. Would there be an appropriate direction for appending to previous results?
Atm the logic generating these result files has no knowledge of whether the test invocation is a first-time run or a re-run.
Presumably, this code could just read the same result file if one already exists, and then append or increment the test time and the tests/failures counts with the current measurements.
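A minimal sketch of that read-then-increment idea, assuming the testsuite element carries the time, tests, and failures attributes from the JUnit schema; the function name merge_result is hypothetical:

```python
import os
import xml.etree.ElementTree as ElementTree


def merge_result(result_file, test_time, tests=1, failures=0):
    """Fold the current run's numbers into any existing xunit file's totals.

    If no previous result file exists, the current run's numbers are
    returned unchanged; otherwise the previous testsuite attributes are
    read and incremented.
    """
    if os.path.exists(result_file):
        root = ElementTree.parse(result_file).getroot()
        test_time += float(root.get('time', 0))
        tests += int(root.get('tests', 0))
        failures += int(root.get('failures', 0))
    return test_time, tests, failures
```

The caller would then write a fresh file using the merged totals, so repeated invocations accumulate rather than overwrite.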
An existing result file could also be from an independent earlier invocation, though.
I was going to say that, for xunit, I think it makes sense to aggregate test time over the number of tests. If you wanted to reset the test results, then I think one would want to explicitly clean the test results for that package/workspace. But I'm not confident whether a repeated test should be considered part of the same test suite that the junit schema mentions. http://windyroad.com.au/dl/Open%20Source/JUnit.xsd Should intermittent test results be aggregated when using retries?
I don't think that this is how any existing testing frameworks do it. I would also find it extremely confusing to find a test result which shows that the test ran for 100s when the configured timeout is e.g. 30s.
I guess it is up to the underlying tool how it chooses to report that.
Fixes ament#269
Signed-off-by: ruffsl <roxfoxpox@gmail.com>

* Add actual test time to xUnit result files (Fixes #269)
* Report test_time even with skipped test
* Set time attribute for testcase element

Signed-off-by: ruffsl <roxfoxpox@gmail.com>
When generating .xml xunit result files for tests, the ament_cmake_test python module records the number of seconds the test run took as a constant zero, and thus does not accurately account for the real-world duration the test took to complete:
ament_cmake/ament_cmake_test/ament_cmake_test/__init__.py, Lines 325 to 330 in cba4e09
This inaccuracy can impede the discoverability of performance regressions by falsifying test statistics that are consumed by test result aggregation tools, e.g. continuous integration frameworks. Another second-order effect is the misallocation of CI resources, as test jobs can no longer be optimally split via expected test timing across pools of CI runners.
I'd suggest extending _generate_result with, say, a default argument test_time=0, and updating the respective call sites to provide the elapsed time taken from running the subprocess command under test:
ament_cmake/ament_cmake_test/ament_cmake_test/__init__.py, Line 313 in cba4e09
Some suggestions on how timing for subprocess commands may be recorded:
https://stackoverflow.com/questions/13889066/run-an-external-command-and-get-the-amount-of-cpu-it-consumed/13933797#13933797
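For completeness, the CPU-time alternative raised earlier could be sketched as below. As noted in the thread, this is Unix-only (the resource module is not available on Windows), and the function name run_with_cpu_time is hypothetical.

```python
import resource
import subprocess


def run_with_cpu_time(cmd):
    """Run a command and return the CPU seconds consumed by children.

    Takes the delta of getrusage(RUSAGE_CHILDREN) around the run, which
    isolates this command's usage assuming no other child processes are
    reaped concurrently. Unix only.
    """
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(cmd)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    return ((after.ru_utime - before.ru_utime)
            + (after.ru_stime - before.ru_stime))
```

This measures scheduler-independent CPU consumption, but as argued above it would not reflect time a test spends sleeping or blocked, which is why the wall-clock delta was preferred.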