Improve automated tests #359
Conversation
When the protobuf compiler is not installed, an error message is shown that was missing a space between two words.
Thank you, Hannes, for your contribution. It is very welcome! @vkresch will review it tomorrow.
vkresch
left a comment
Thanks for the improvement! Looks nice and pythonic :). Just change one line for the newline warning and we can merge it.
vkresch
left a comment
Thanks!
@jdsika there seems to be an issue with the status report from Travis blocking the merge of this PR. See here for how it might be solved: https://travis-ci.community/t/known-issue-travis-ci-reports-expected-waiting-for-status-to-be-reported-on-the-github-status-api-but-the-status-never-arrives/1154
I cannot find the option. Very strange...
@hanneskaeufler I invited you to the organization. Maybe this is why the status is not being properly reported. Can you accept the invite and make an empty commit (not a reopen of the PR) as a member? If this does not work, @vkresch will copy your changes and create a branch.
There is no reason to traverse the file line by line if all we care about is the last character. This also uses unittest's subTest to show which file fails the test.
When one file is failing, the other failures are swallowed, because all the tests contain for loops and Python stops running a test at the first failed assertion. With subTest one can introduce a subcontext that reports all of the failed files, along with a diagnostic of which file failed, as the sketch below illustrates.
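A minimal sketch of what the rewritten check could look like (the `*.proto` glob and the test names are assumptions, not the repository's actual code): seeking straight to the last byte avoids traversing every line, and `subTest` lets the loop report every offending file instead of stopping at the first failure.

```python
import glob
import os
import unittest


class TestFileFormat(unittest.TestCase):
    def test_files_end_with_newline(self):
        for path in glob.glob("*.proto"):
            # subTest keeps the loop running after a failure and
            # records which file was being checked.
            with self.subTest(file=path):
                with open(path, "rb") as f:
                    f.seek(-1, os.SEEK_END)  # jump straight to the last byte
                    self.assertEqual(
                        f.read(1), b"\n",
                        f"{path} does not end with a newline"
                    )
```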
Python provides mechanisms for that; there is no reason to manually track a count.
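The exact spot is not shown in this thread, but the usual idiom for this is `enumerate`; a hypothetical before/after:

```python
lines = ["first", "second", "third"]

# Hand-tracked counter (the pattern being removed):
count = 0
for line in lines:
    count += 1
    print(count, line)

# Idiomatic equivalent: enumerate yields the count with each item.
for count, line in enumerate(lines, start=1):
    print(count, line)
```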
The globs were repeated a whole bunch of times, which is both information duplication and unnecessary work.
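One way to remove such duplication (the pattern and constant name here are hypothetical) is to evaluate each glob once at module level and reuse the result in every test:

```python
import glob
import unittest

# Evaluated once at import time instead of re-running the same
# glob in every test method.
PROTO_FILES = glob.glob("*.proto")


class TestFileFormat(unittest.TestCase):
    def test_no_tabs(self):
        for path in PROTO_FILES:
            with self.subTest(file=path):
                with open(path) as f:
                    self.assertNotIn("\t", f.read())
```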
There is no reason to guard the assertions with a conditional.
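As an assumed shape of the removed pattern (not the exact diff), an if-guard around an assertion only duplicates the check the assertion already performs:

```python
import unittest


class TestExample(unittest.TestCase):
    def test_guarded_vs_unguarded(self):
        actual, expected = 1 + 1, 2

        # Redundant: the assertion already passes when the values match,
        # so the guard adds nothing but noise.
        if actual != expected:
            self.assertEqual(actual, expected)

        # The unguarded equivalent does the same job in one line.
        self.assertEqual(actual, expected)
```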
Thanks! I accepted and rebased one more time ...
Reference to a related issue in the repository
None. These are out-of-context quality-of-life improvements to the automated test suite.
Add a description
I saw some low-hanging fruit in improving these tests. I hope the commit messages explain the changes and the intentions behind them well enough. If not, feel free to comment on the details :)
Check the checklist