feat(test results): error message PR comment if no successful parse #358
Conversation
🔍 Existing Issues For Review
Your pull request is modifying functions with the following pre-existing issues:
📄 File: tasks/test_results_finisher.py
Codecov Report
✅ All tests successful. No failed tests found.

```diff
@@           Coverage Diff           @@
##             main     #358   +/-   ##
=======================================
  Coverage   97.44%   97.44%
=======================================
  Files         395      395
  Lines       33422    33514    +92
=======================================
+ Hits        32568    32658    +90
- Misses        854      856     +2
```

Flags with carried forward coverage won't be shown.
Codecov Report
Changes have been made to critical files, which contain lines commonly executed in production.
✅ All tests successful. No failed tests found.

```diff
@@            Coverage Diff             @@
##             main     #358      +/-   ##
==========================================
- Coverage   97.47%   97.47%   -0.01%
==========================================
  Files         426      426
  Lines       34121    34205      +84
==========================================
+ Hits        33259    33340      +81
- Misses        862      865       +3
```

Flags with carried forward coverage won't be shown.
... and 17 files with indirect coverage changes
Force-pushed from c2afcbb to 51268e3
Force-pushed from dd7aa1b to aab542a
Force-pushed from 8dd6d4f to 7bf5756
services/comparison/__init__.py (outdated)

```diff
@@ -29,6 +29,7 @@ class NotificationContext(object):
     """Extra information not necessarily related to coverage that may affect notifications"""

     all_tests_passed: bool
+    test_results_error: TestResultsProcessingError | None = False
```
Shouldn't the default value be None, to follow the type annotation?
```python
yield (
    ":white_check_mark: All tests successful. No failed tests found :relaxed:"
)
if comparison.test_results_error():
```
So... if there are no tests, then we assume all tests passed? Crazy... Can we fix that? Maybe by checking that there's at least 1 test in the "all tests passed" branch.
I didn't realize this was a possibility before 😅
Force-pushed from 7bf5756 to 1fc9b1b
Lots of changes here. Looks good in general.
I did leave a comment I'd like to see fixed before merging tho 🙏
```diff
@@ -47,10 +47,16 @@ def header_lines(self, comparison: ComparisonProxy, diff, settings) -> List[str]
         hide_project_coverage = settings.get("hide_project_coverage", False)
         if hide_project_coverage:
             if comparison.all_tests_passed():
```
Shouldn't this bit follow the same logic as above, for consistency?

```python
if comparison.test_results_error():
    ...
elif comparison.all_tests_passed():
    ...
```
Signed-off-by: joseph-sentry <joseph.sawaya@sentry.io>
Signed-off-by: joseph-sentry <joseph.sawaya@sentry.io>
This commit:
- adds the TestResultsProcessingError enum
- adds the enum as a nullable field in the test results totals
- in the test results finisher, when there are no successful parses, writes to the error field in the totals
- immediately writes the error message to the comment, then queues up a notify
- in the notify step, reads from the totals to check whether there was a test results processing error; if there was, lets the user know by displaying it in the coverage comment, similarly to how we show successful parsing of test results
The logic for doing this is as follows: if there are no successful test results processing tasks in this chord, we check whether test instances were previously processed for this commit. If there are previous test instances on this commit, there is no error, and we just exit the finisher without doing anything. If there are no previous test instances on this commit, then we should notify the user that there is an error. So we begin by making a test results error comment that contains no coverage info, then queue up the notify task so it can create a coverage comment that still contains this error's information. If there are successful test results processing tasks, then we should clear the error on the test results totals for this commit. Signed-off-by: joseph-sentry <joseph.sawaya@sentry.io>
If there has been any success in processing for a given commit, its associated test results totals should have the error set to None; otherwise it should be NO_SUCCESS. The condition for all_tests_passed is: if the error is None and failed == 0 in the totals, then all tests passed. In the comment, if there's an error, show the error comment; else if all tests passed, show the all-tests-passed message. If neither of these is true and the user HAS uploaded test results, then there must have been failures, or they somehow skipped all their tests. Signed-off-by: joseph-sentry <joseph.sawaya@sentry.io>
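The message-selection order described above, as a rough sketch. The strings and parameter names are illustrative, not the actual comment templates:

```python
def header_message(error, all_tests_passed: bool, has_test_results: bool) -> str:
    # Order matters: an error takes precedence over the success message.
    if error is not None:
        return "error: test results could not be parsed"
    if all_tests_passed:
        return "all tests successful, no failed tests found"
    if has_test_results:
        # Uploads exist but not all passed: failures, or everything was skipped.
        return "some tests failed or were skipped"
    return ""
```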
Signed-off-by: joseph-sentry <joseph.sawaya@sentry.io>
Force-pushed from 0b895c4 to e8b369a
Fixes: codecov/test-results-action#15