[Experimental] Initial attempt at adding JUnit XML support #55
Codecov Report
@@ Coverage Diff @@
## master #55 +/- ##
==========================================
+ Coverage 82.44% 83.31% +0.86%
==========================================
Files 3 3
Lines 735 767 +32
==========================================
+ Hits 606 639 +33
+ Misses 129 128 -1
Continue to review full report at Codecov.
def print_result_cache_junitxml():
    test_cases = []
    l = list(select(x for x in Mutant))
    for filename, mutants in groupby(l, key=lambda x: x.line.sourcefile.filename):
I don't think we need the `groupby` here, unless we want to create separate test suites.
Actually we do, so that mutants are listed on the report by file (even if they are not really grouped)
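As an aside, `itertools.groupby` only merges *consecutive* items with equal keys, which is why the rows need to be ordered by filename for per-file grouping to work. A minimal sketch (the `(filename, id)` pairs are hypothetical stand-ins for mutmut's `Mutant` rows):

```python
from itertools import groupby

# Hypothetical (filename, mutant_id) pairs standing in for Mutant rows.
mutants = [
    ("a.py", 1), ("b.py", 2), ("a.py", 3), ("b.py", 4),
]

# groupby only merges consecutive items with equal keys, so without
# sorting, "a.py" and "b.py" each start a new group twice: 4 groups.
unsorted_groups = [(k, list(g)) for k, g in groupby(mutants, key=lambda m: m[0])]
assert len(unsorted_groups) == 4

# Sorting by the key first yields exactly one group per file.
mutants.sort(key=lambda m: m[0])
sorted_groups = {k: [m[1] for m in g] for k, g in groupby(mutants, key=lambda m: m[0])}
assert sorted_groups == {"a.py": [1, 3], "b.py": [2, 4]}
```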
(Screenshots of the rendered JUnit XML output, as displayed by a reporting tool, were attached here.)
@@ -136,6 +139,51 @@ def print_stuff(title, query):
    print_stuff('Untested', select(x for x in Mutant if x.status == UNTESTED))
def get_unified_diff(argument, dict_synonyms): |
Refactoring the code that was performing the unified diff in `__main__.py` and bringing it here, so that we can also use it for the JUnit report.
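For reference, a unified diff between an original source line and its mutated version can be produced with the standard library's `difflib`. This is only a sketch of the general technique, not mutmut's actual `get_unified_diff` implementation (the function name and signature below are illustrative):

```python
import difflib

def unified_diff_of_mutant(original: str, mutated: str, filename: str) -> str:
    """Render a unified diff between the original and mutated source text."""
    diff = difflib.unified_diff(
        original.splitlines(keepends=True),
        mutated.splitlines(keepends=True),
        fromfile=filename,
        tofile=filename,
    )
    return "".join(diff)

result = unified_diff_of_mutant("x = a + b\n", "x = a - b\n", "example.py")
print(result)
```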
I like it! Fairly small diff, and it seems pretty feature-complete. The new command seems reasonable to me. As for the status of the suspicious mutants, I'm fine with having them just not be in the output. This change seems pretty much ready to merge as is, I think. What do you think?
Glad you liked it! Let me add an argument to the command to control what to do with the suspicious mutants, and then we can merge. Sounds good?
Great!
I added a couple of parameters to change the policy for dealing with suspicious and untested mutants. A few notes for future reference:
That's it, ready to merge (I suggest "squash and merge", since there is a lot of noise on this PR)!
Yea, the codecov thing is bs and should be ignored. I should google how to turn that off sometime :P I use RST for the documentation generation with Sphinx. You could have fixed it in this PR, that's fine. I'm not a puritan about tiny stuff like that.
Related to #49. Most CI tools include some support for interpreting JUnit XML output: extracting information about total executed tests vs. passed, capturing stdout/stderr from test execution, etc. This is an attempt at implementing some very rudimentary support for JUnit XML in mutmut.

In order to get the report in this new format, simply run `mutmut junitxml` (similar to running `mutmut results`).

Notes for discussion:

- Maybe this should be a flag on the `results` command, instead of a new command (it's just a different way of reporting the results).
- `stdout` and `stderr` attributes: right now I'm simply adding `Mutant.line.line`, but ideally we want to show the diff of the mutant. It seems the code for generating the diff is in the `__main__.py` file, so maybe that can be refactored and reused there?
- `failure` vs. `error` vs. `skipped`? I'm using `failure` for surviving mutants, `error` for timeouts, and `skipped` for both untested and suspicious. A flag could be added to determine if suspicious mutants should be skipped or considered failures/errors.
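The status mapping described in that last bullet can be sketched with the standard library alone. The element and attribute names below follow common JUnit XML conventions (`testsuite`, `testcase`, `failure`, `error`, `skipped`); the status constants and function name are placeholders for illustration, not mutmut's actual internals:

```python
import xml.etree.ElementTree as ET

# Placeholder status constants standing in for mutmut's internal ones.
OK_KILLED, BAD_SURVIVED, BAD_TIMEOUT, UNTESTED, SUSPICIOUS = range(5)

def mutants_to_junitxml(mutants):
    """Map (name, status, diff) triples to a JUnit XML document string."""
    suite = ET.Element("testsuite", name="mutmut", tests=str(len(mutants)))
    for name, status, diff in mutants:
        case = ET.SubElement(suite, "testcase", name=name)
        if status == BAD_SURVIVED:
            # Surviving mutants count as failures; the diff goes in the body.
            ET.SubElement(case, "failure", message="mutant survived").text = diff
        elif status == BAD_TIMEOUT:
            # Timeouts count as errors.
            ET.SubElement(case, "error", message="timeout").text = diff
        elif status in (UNTESTED, SUSPICIOUS):
            # Untested and suspicious mutants are skipped.
            ET.SubElement(case, "skipped")
        # Killed mutants are plain passing test cases with no child element.
    return ET.tostring(suite, encoding="unicode")

xml = mutants_to_junitxml([
    ("mutant_1", OK_KILLED, ""),
    ("mutant_2", BAD_SURVIVED, "-x = a + b\n+x = a - b"),
    ("mutant_3", BAD_TIMEOUT, ""),
    ("mutant_4", SUSPICIOUS, ""),
])
```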