Brittle unit tester testing is still brittle #302
Good news! The answer probably lies in:

```python
for testCase in testCases:
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(testCase)
    expected_logger_errors += testCase.expected_logger_errors
    expected_logger_warnings += testCase.expected_logger_warnings
    unittest.TextTestRunner().run(suite)
```

We can customize our TestRunner to produce less garbage, and use the return value of `run` to get the information we want out of it directly. In addition, we may be able to use the built-in `expectedFailures` and `unexpectedSuccesses` instead of our own hack, which was half used and is now abandoned. Even without the ability to completely control the output, with a captured TestResult we'll be able to come up with our own smartly-written stupid string to compare against! Alternatively, we could look up a way to share some chunk of memory (virtual, real, or disk) to pass the information between the processes, and there won't be any more string matching ever!
Ideas for the much more fun, non-string-parsing way:
My guess is that we'll go with more string crunching until it fails again, then break down and do the fun things.
Another problem: currently we say "Assert there was 1 error", but it could be any error. As I've found in fixing translation_bone_rules and rotation_bone_rules, false positives can arise.
We could really use persistent error codes like we have in WED.
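A sketch of what persistent error codes could look like; the enum members and `CodedLogger` below are hypothetical illustrations, not WED's or the add-on's real API. The point is that a test can then assert *which* error occurred, not merely that one error of any kind occurred.

```python
import enum

class XPlaneErrorCode(enum.Enum):
    # Hypothetical codes; the real set would mirror WED's persistent codes.
    TRANSLATION_BONE_RULE = 101
    ROTATION_BONE_RULE = 102

class CodedLogger:
    """Records (code, message) pairs instead of bare strings."""
    def __init__(self):
        self.errors = []
    def error(self, code, msg):
        self.errors.append((code, msg))

logger = CodedLogger()
logger.error(XPlaneErrorCode.ROTATION_BONE_RULE, "bad rotation keyframe")

# Assert the specific error, ruling out false positives from unrelated errors:
codes = [code for code, _ in logger.errors]
assert codes == [XPlaneErrorCode.ROTATION_BONE_RULE]
```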
Also! The logger doesn't get cleared automatically between test cases. This was discovered around the time of fa80228 with rotation_bone_rules.test.py: test_5 was expecting to test against a fixture, and when it failed, test_6 also failed despite normally passing, because the logger still had leftover errors. See also #303
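A sketch of the fix, assuming the shared logger grows a `clear()` method (`StickyLogger` here is a stand-in for the add-on's real logger): clearing in `setUp` guarantees each test starts from a clean slate, so test_5's errors can't leak into test_6.

```python
import io
import unittest

class StickyLogger:
    """Stand-in for the add-on's global logger, which keeps state."""
    def __init__(self):
        self.errors = []
    def error(self, msg):
        self.errors.append(msg)
    def clear(self):
        self.errors = []

logger = StickyLogger()

class BoneRuleTests(unittest.TestCase):
    def setUp(self):
        # Without this, errors logged by test_5 would leak into test_6.
        logger.clear()

    def test_5(self):
        logger.error("rotation rule violated")
        self.assertEqual(len(logger.errors), 1)

    def test_6(self):
        # Starts clean because setUp cleared the shared logger.
        self.assertEqual(len(logger.errors), 0)

result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(BoneRuleTests)
)
print(result.wasSuccessful())  # True
```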
In addition! It would be really nice to be able to filter the output a bit to remove Blender's miscellaneous, irrelevant debug messages. The addition of a […] These seem relevant to the problem:
See #408
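One cheap filtering approach: keep only captured lines that start with a known prefix. The prefixes and sample output below are illustrative assumptions, not Blender's actual message format.

```python
# Toy captured output mixing Blender chatter with lines we care about.
raw_output = """\
Read blend: /tmp/fixture.blend
AL lib: (EE) UpdateDeviceParams: Failed to set 44100hz
INFO: Begin export
ERROR: rotation_bone_rules: bad keyframe
Blender quit
"""

# Hypothetical prefixes our own messages would carry.
RELEVANT_PREFIXES = ("INFO:", "ERROR:", "WARNING:")

relevant = [
    line for line in raw_output.splitlines()
    if line.startswith(RELEVANT_PREFIXES)
]
print(relevant)  # ['INFO: Begin export', 'ERROR: rotation_bone_rules: bad keyframe']
```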
Another burn today. Good news: there are two new ways to strengthen the unit tester without much work.
Also! We could replace the logger with […]
…de the TestCase) and #319 (The stupid xplane_config.debug/log is now deleted)
…emoved the useless expected_logger_errors, used the TestResult data, printed out final warning messages, simplified the test code, removed the very very very problematic string-crunching code (the dot-line E and F stupidity), gave __init__.py access to the args passed into tests.py, and added small profiling info about how long it takes to run a test.
Closing and splitting the other bits into smaller bugs!
How the unit tester decides if there was an error has gone through many iterations, each hoping not to produce false negatives or positives.
The latest iteration (7a96f1d):

```python
# First line of output is unittest's sequence of dots, E's, and F's
if 'E' in out_lines[0] or 'F' in out_lines[0] or num_errors != 0:
```