
stestr load run time output doesn't reflect reality #119

Closed
mtreinish opened this issue Oct 19, 2017 · 0 comments
Comments

@mtreinish (Owner) commented:

When you run stestr load on a stream that is not being processed in real time by stestr run (or piped via stdin from a subunit-emitting test run), the run time included in the output generated by the subunit_trace module doesn't reflect reality. Instead it is the time it took to process the subunit stream, which isn't accurate. We should update the subunit_trace module to derive the run time from the timestamps inside the stream, instead of taking two wall-clock timestamps (one before and one after the subunit processing) to figure out the run time.
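A minimal sketch of the proposed fix, independent of the actual subunit API: given subunit-style events as `(test_id, status, timestamp)` tuples (a simplified stand-in for the real parsed stream), the run time is the span from the earliest start to the latest stop, regardless of how fast the stream itself is read.

```python
from datetime import datetime, timezone

def run_time_from_events(events):
    """Compute elapsed run time from timestamped subunit-style events.

    `events` is an iterable of (test_id, status, timestamp) tuples, where
    'inprogress' marks a test start and any other status marks a stop.
    Returns the seconds between the first start and the last stop, which
    reflects the actual run time rather than stream-processing time.
    """
    first_start = None
    last_stop = None
    for _test_id, status, ts in events:
        if status == 'inprogress':
            if first_start is None or ts < first_start:
                first_start = ts
        elif last_stop is None or ts > last_stop:
            last_stop = ts
    if first_start is None or last_stop is None:
        return None
    return (last_stop - first_start).total_seconds()

events = [
    ('test_a', 'inprogress', datetime(2017, 10, 19, 12, 0, 0, tzinfo=timezone.utc)),
    ('test_a', 'success',    datetime(2017, 10, 19, 12, 0, 5, tzinfo=timezone.utc)),
    ('test_b', 'inprogress', datetime(2017, 10, 19, 12, 0, 5, tzinfo=timezone.utc)),
    ('test_b', 'success',    datetime(2017, 10, 19, 12, 0, 12, tzinfo=timezone.utc)),
]
print(run_time_from_events(events))  # 12.0
```

The function and event format above are illustrative assumptions, not stestr's actual code; the real subunit_trace module consumes parsed subunit packets rather than plain tuples.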

@mtreinish mtreinish added the bug label Oct 19, 2017
mtreinish added a commit that referenced this issue Oct 3, 2018
This commit updates the subunit_trace module to use the timestamps
reported in the subunit stream, instead of the wall time, for the elapsed
time output. When we were using the wall time, if we were just reading a
subunit file the reported value would be the time it took to read and
process the subunit data, which is not the actual run time of the tests.
By switching to the elapsed time between the first start timestamp and
the last stop timestamp we'll always get an accurate value. This does
come with the tradeoff of not counting any setUp before the first test's
start time or tearDown/cleanUp after the last test, since that data isn't
in the subunit stream. But that feels like a worthwhile compromise for
having moderately accurate numbers all the time.

Fixes #119
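The discrepancy the commit describes can be demonstrated with a hypothetical pre-recorded stream (the event tuples here are illustrative, not the real subunit format): wall-clock timing around the parse step measures how long the file took to read, while the timestamps inside the stream recover the real run time.

```python
import time
from datetime import datetime, timedelta, timezone

# Pre-recorded events from a run whose tests actually took 30 seconds.
start = datetime(2018, 10, 3, tzinfo=timezone.utc)
events = [
    ('t1', 'inprogress', start),
    ('t1', 'success', start + timedelta(seconds=30)),
]

# Old approach: wall-clock timestamps around stream processing. Reading a
# stored stream is nearly instant, so this reports ~0s instead of 30s.
t0 = time.monotonic()
for _event in events:  # stand-in for parsing the stored subunit stream
    pass
wall_elapsed = time.monotonic() - t0

# New approach: first start timestamp to last stop timestamp, taken from
# the stream itself, so the answer is the same however fast we read it.
stream_elapsed = (events[-1][2] - events[0][2]).total_seconds()
print(wall_elapsed < 1.0, stream_elapsed)  # True 30.0
```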
mtreinish added a commit that referenced this issue Oct 3, 2018