
Pass R errors through to Conbench #143

Merged · 2 commits · Jun 22, 2023
Conversation

alistaire47 (Contributor):

Closes #134. Builds on voltrondata-labs/arrowbench#138 to take results with an error element and pass them through to Conbench properly. Improves some tests along the way.

Tests are passing locally, but I want to run this against a local Conbench instance to make sure nothing is malformed before merging.

```python
logging.exception(json.dumps(error))
return error, output
```

```python
result["error"]["command"] = r_command
logging.exception(f"Errored result (not posted): {json.dumps(result)}")
```
alistaire47 (Contributor, Author) commented:

Theoretically we could post these to Conbench now, but I'm not sure whether there's value in doing so. Errors within the benchmark won't hit this path; it's for when things bomb out entirely somehow, e.g. you spelled the name of the benchmark wrong.
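A minimal sketch of the pattern in the snippet above, with a hypothetical `report_errored_result` helper (the function name is mine, not the PR's): the failed R command is attached to the error payload, and the whole result is logged rather than posted to Conbench.

```python
import json
import logging

def report_errored_result(result: dict, r_command: str) -> None:
    # Hypothetical helper illustrating the snippet above: attach the R
    # command that failed to the error payload, then log the result
    # instead of posting it to Conbench.
    result["error"]["command"] = r_command
    logging.exception(f"Errored result (not posted): {json.dumps(result)}")
```

When called from inside an `except` block, `logging.exception` also records the active exception's traceback, which is the reason to prefer it over `logging.error` here.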

alistaire47 (Contributor, Author) commented:

A lot of this is just rearranging; I moved the single-use assert_* function bodies directly into the tests so you can see what they're testing without navigating all over the code.

```python
assert output is None


@pytest.mark.parametrize("error_type", ["NULL"])
@pytest.mark.parametrize("output_type", ["NULL", "'warning'"])
def test_r_only_placebo(error_type, output_type):
```
alistaire47 (Contributor, Author) commented:

this is new
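Stacked `@pytest.mark.parametrize` decorators like the ones above expand to the cross product of their argument lists, so the placebo test body runs once per (error_type, output_type) pair. A minimal sketch of that expansion, using plain `itertools` rather than pytest:

```python
import itertools

# The two parametrize lists from the placebo test; pytest runs the test
# body once for each pair in their cross product.
error_types = ["NULL"]
output_types = ["NULL", "'warning'"]

combos = list(itertools.product(error_types, output_types))
# One placebo run per combination of the two lists.
```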


```python
@pytest.mark.parametrize("error_type", ["'base'"])
@pytest.mark.parametrize("output_type", ["NULL", "'warning'"])
def test_r_only_exception(error_type: str, output_type: str):
```
alistaire47 (Contributor, Author) commented:

This is a significant refactor of an outdated test, which no longer raised errors as it used to because arrowbench now handles them properly. The test now checks that labs/benchmarks properly handles the errors arrowbench catches.
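A hedged sketch of the behavior under test, with hypothetical names (`has_error` is mine, not the repo's): arrowbench catches benchmark errors and returns them as an `error` element inside the result rather than raising, so the Python side detects that element instead of expecting an exception.

```python
# Hypothetical illustration: arrowbench returns errors inside the result
# rather than raising, so the caller checks for an "error" element.
def has_error(result: dict) -> bool:
    error = result.get("error")
    # Treat a missing element, or the literal "NULL" used in these
    # parametrized tests, as "no error".
    return error is not None and error != "NULL"

errored = has_error({"error": {"error": "some R error", "stack_trace": []}})
clean = has_error({"result": 42})
```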

@coveralls commented:

Pull Request Test Coverage Report for Build 5340132227

  • 95 of 95 (100.0%) changed or added relevant lines in 3 files are covered.
  • 1 unchanged line in 1 file lost coverage.
  • Overall coverage increased (+0.1%) to 87.38%

Files with coverage reduction:
  • benchmarks/_benchmark.py — 1 new missed line (83.72%)

Totals:
  • Change from base Build 5268118744: 0.1%
  • Covered Lines: 1551
  • Relevant Lines: 1775

💛 - Coveralls

@alistaire47 (Contributor, Author) commented:

Ran against a local Conbench instance and it posts successfully:

[screenshot: the errored result rendered in the local Conbench UI]

Tracebacks are too wide to read easily, and the warnings subdict doesn't get unpacked into a subtable, but these are existing UI issues. At least the data is there, so users can copy it and reformat it to see what's going on.

And we'll be able to see on Conbench when benchmarks are failing!

@alistaire47 (Contributor, Author) commented:

The changes here are relatively simple and well-tested, so I'm going to merge this, but post-merge reviews are welcome; happy to do a follow-up if necessary. I'll keep an eye on Buildkite to see if anything goes awry.

@alistaire47 alistaire47 merged commit 8a7a4b2 into main Jun 22, 2023
2 checks passed
@alistaire47 alistaire47 deleted the edward/pass-through-r-errors branch June 22, 2023 21:48
Linked issue this PR may close: Send R errors to Conbench properly