This repository has been archived by the owner on Apr 3, 2019. It is now read-only.
fix(metrics): measure request count and time in perf tests #97
My previous attempt at this script was just an epic fail, sorry. I should not have merged it into master.

The main problem fixed in this PR is my desperately foolish attempt to use the duration of a fixed-period load test as a benchmark/comparison. Now it reports the number of requests and the average requests per second instead.
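A minimal sketch of the reporting idea: since the run length of a fixed-period load test is constant by construction, the useful numbers are the request count and the derived throughput. The function and output format below are hypothetical illustrations, not the actual script:

```python
# Hypothetical summary helper: the duration of a fixed-period load test is
# constant, so report what actually varies between runs instead.
def summarize(request_count: int, duration_seconds: float) -> str:
    # Average throughput over the whole test period.
    avg_rps = request_count / duration_seconds
    return f"{request_count} requests, {avg_rps:.1f} req/s average"

print(summarize(12000, 60.0))  # "12000 requests, 200.0 req/s average"
```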
Another problem it fixes is a variable name that I mystifyingly changed in one place but not the other. That code path isn't entered when the load tests run on a separate machine from the one issuing the metrics queries, so I didn't spot it when I was running in EC2.
The final change fixes the path to the script in the comments.