tests: Make the OMB validator use result.json #16086
Conversation
/cdt
Oh nice, didn't know we had those.
LGTM
@@ -317,7 +317,18 @@ def check_succeed(self, validate_metrics=True):
         self.raise_on_bad_log_lines(node)
         # Generate charts from the result
         self.logger.info(f"Generating charts with command {self.chart_cmd}")
-        metrics = json.loads(self.node.account.ssh_output(self.chart_cmd))
+        self.node.account.ssh_output(self.chart_cmd)
So we still get the chart in the artifact, I guess?
Yep correct
+        metrics = json.loads(
+            self.node.account.ssh_output(
+                f'cat {OpenMessagingBenchmark.RESULT_FILE}'))
+
+        # Previously we were using generate_charts.py to get the metrics which
Huh, maybe that's why it was being used...
Yes, possibly, though I could also see a case where we were already using it, then wanted the new metric and just added it to generate_charts directly.
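For anyone following along, here is a minimal sketch of the patched flow under discussion: the chart command is still run so the chart lands in the test artifacts, while the metrics now come from `result.json`. The `RESULT_FILE` path and the standalone helper shape are assumptions for illustration, not the actual test code.

```python
import json


class OpenMessagingBenchmark:
    # Assumed path for illustration; the real constant lives in the test suite.
    RESULT_FILE = "/tmp/omb/result.json"


def collect_metrics(node, logger, chart_cmd):
    # Run chart generation purely for the artifact; its stdout is no
    # longer parsed for metrics.
    logger.info(f"Generating charts with command {chart_cmd}")
    node.account.ssh_output(chart_cmd)
    # result.json contains the complete metrics, including p999 e2e latency.
    return json.loads(
        node.account.ssh_output(f"cat {OpenMessagingBenchmark.RESULT_FILE}"))
```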
+        for key in validator.keys():
+            if key not in metrics:
+                assert False, f"Missing requested validator key {key} in metrics"
nit: this could just be `assert key in metrics, ...`?
Of course yes, fixed.
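For completeness, a sketch of the loop after the nit, assuming the same `validator` and `metrics` dicts as in the diff above:

```python
# Assert directly instead of branching into `assert False`; the failure
# message is unchanged.
for key in validator.keys():
    assert key in metrics, f"Missing requested validator key {key} in metrics"
```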
Force-pushed from fd74146 to b3d8352.
ducktape was retried in https://buildkite.com/redpanda/redpanda/builds/43768#018d0cc9-4e8a-4036-953e-c757fa6560fc
ducktape was retried in https://buildkite.com/redpanda/redpanda/builds/43768#018d0ccf-57be-4b11-bba7-9385a6f20b28
Failure is: #16026
The OMB validator was not validating p999 even though it was specified
in the validator requirements.

There were two reasons for that:

- We were using the data from the output of the `generate_charts`
  command, which is incomplete and doesn't return p999 e2e latency
- This wasn't fatal because the validator logic silently failed to
  validate requested validation metrics if they were not in the metrics
  output

This patch fixes both by switching to actually using the
`result.json` file and further inverts the validation loop such that it
errors out if a requested validation metric does not exist.
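To make the inversion concrete, here is a hypothetical before/after sketch (function names and the validator-callable shape are assumptions; only the iteration direction and the assert reflect the patch):

```python
# Before (sketch): iterating over the metrics silently skips any requested
# validator key that is missing from the metrics output.
def validate_before(validator, metrics):
    for key, value in metrics.items():
        if key in validator:
            validator[key](value)


# After (sketch): iterating over the validator requirements errors out when
# a requested metric does not exist in the metrics output.
def validate_after(validator, metrics):
    for key in validator.keys():
        assert key in metrics, f"Missing requested validator key {key} in metrics"
        validator[key](metrics[key])
```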
Backports Required
Release Notes