Use "status" prop and other misc improvements #27

Merged
merged 11 commits into gradescope:master from addStatus on Sep 28, 2022

Conversation

@isaacl isaacl commented Sep 18, 2022

Use new "status" key to mark failing tests red, even if they are 0 weight.
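A hypothetical sketch of what a per-test entry using the "status" key might look like in the autograder's results JSON. The field names follow the Gradescope results.json format as I understand it; the specific test name, score, and output text are invented for illustration.

```python
import json

# Hypothetical per-test entry in the results JSON. A zero-weight test
# (max_score == 0) would otherwise always render as passing; the
# "status" field lets it be marked failed (red) explicitly.
results = {
    "tests": [
        {
            "name": "test_extra_credit",   # invented example name
            "score": 0,
            "max_score": 0,                # zero weight
            "status": "failed",            # explicit failure marker
            "output": "Test Failed: expected 42, got 7",
        }
    ]
}

print(json.dumps(results, indent=2))
```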

@isaacl isaacl requested a review from a team as a code owner September 18, 2022 14:17
Use new "status" key to mark failing tests red,
even if they are 0 weight.
@isaacl isaacl force-pushed the addStatus branch 4 times, most recently from c9aee18 to a58cafe on September 18, 2022 16:24

isaacl commented Sep 18, 2022

@ibrahima ^

@isaacl isaacl changed the title Mark failing tests with "status" key Use "status" prop and other misc improvements Sep 18, 2022
@ibrahima (Contributor)

Hi @isaacl, thanks for your contribution! We'll have to take a look at it in the near future.

As a broad note, since this is the "default" public library for Gradescope autograders, we do need to make sure that any changes make sense for everybody, so we will be thinking about these changes from that perspective.


@ibrahima ibrahima left a comment


Thanks for your contribution, this looks great! Just had one main request.

    output += '\n'
else:
    output += '\n\n'
output += f'Fail: {err[1] or "<no msg>"}\n'
ibrahima (Contributor)

Could you revert this output change for now? Thanks! I am not sure if it makes sense generally and it would be good to isolate this change from the test status change in any case. I think adding the newlines after the test output is helpful, but I would prefer it if you could change the format of the error message back to "Test Failed: {0}\n".format(err[1]) for now.

Also, the f-string formatting is a Python 3.6+ feature, and the library should currently work on Python 2 otherwise, so it'd be nice to maintain that. I know Python 2 is EOL and people should upgrade, but sometimes it feels like there's not a lot of time for that in education, and I wouldn't want to disrupt a Python-2-using instructor if we don't have to.

Suggested change:
- output += f'Fail: {err[1] or "<no msg>"}\n'
+ output += "Test Failed: {0}\n".format(err[1])
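A quick sketch contrasting the two styles under discussion: str.format works on both Python 2.7 and 3.x, while f-strings require Python 3.6 or later. The err tuple here is a stand-in for unittest's (type, value, traceback) error triple.

```python
# Stand-in for a unittest error triple: (exc_type, message, traceback).
err = (AssertionError, "expected 42, got 7", None)

# Python-2-compatible form suggested in the review:
line = "Test Failed: {0}\n".format(err[1])

# f-string form (Python 3.6+ only), with a fallback for an empty message:
line_f = f'Fail: {err[1] or "<no msg>"}\n'

print(line, line_f)
```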

Also, I was under the impression that the unittest assertions generally always have some error message, does the <no msg> thing occur in practice? In any case I would expect that the autograder author should ideally be providing more helpful error messages anyway. If anything, I would prefer it to just say "Test Failed." if there is no message in err[1] rather than "Test Failed: <no msg>", it feels cleaner that way.


Incidentally, with the unittest library you can set the longMessage attribute on a test case to override the default error message formatting, so you could use that to tweak the output as well. You'd still have the "Test Failed" part, but then you could remove the default "assertion failed" message and add in extra whitespace, etc. But that wouldn't help with adding newlines between the other output and the test failure message.

isaacl (Contributor Author)

Happy to split this into a separate PR.

I think "Fail: " or "Failed: " is easier to parse than "Test Failed: ".

Students need to differentiate the prefix from the actual failure message and I think the space in "Test Failed" makes such differentiation harder.

isaacl (Contributor Author)

Also, I was under the impression that the unittest assertions generally always have some error message, does the &lt;no msg&gt; thing occur in practice?

I've never seen it; I was just coding defensively, since err itself can be None and I wasn't sure whether err[1] could similarly be None.

isaacl (Contributor Author)

Split to #29


isaacl commented Sep 27, 2022

Also I think we still want "if output" even if output is a str

It avoids a double newline if output string is empty.

@ibrahima (Contributor)

Also I think we still want "if output" even if output is a str

It avoids a double newline if output string is empty.

Ah good point, I didn't realize an empty string was Falsey in Python. I thought it was just None, 0, and False.
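A small sketch of the truthiness rule behind the "if output" guard: empty strings (and empty containers generally) are falsy in Python, not just None, 0, and False. The variable names here are illustrative, not from the PR.

```python
# Empty containers and empty strings are falsy, not just None, 0, False.
falsy = [None, False, 0, 0.0, "", [], {}, set()]
assert not any(bool(v) for v in falsy)

# So the guard discussed above skips the separator when output is empty:
output = ""
text = ""
if output:  # empty string is falsy, so no stray newline is added
    text += output + "\n"
print(repr(text))
```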


@ibrahima ibrahima left a comment


LGTM!

@ibrahima ibrahima merged commit cc6b1ce into gradescope:master Sep 28, 2022