Brittle unit tester testing is still brittle #302

Closed
tngreene opened this Issue Sep 5, 2017 · 8 comments


tngreene commented Sep 5, 2017

How the unit tester decides whether there was an error has gone through many iterations, each one hoping not to produce false negatives or positives.

The latest iteration (7a96f1d):

```python
# First line of output is unittest's sequence of dots, E's, and F's
if 'E' in out_lines[0] or 'F' in out_lines[0] or num_errors != 0:
    ...
```

The hope was that using the first line's series of .'s and E's would finally solve the problem, but it turns out that is brittle too!

Sometimes an exception can be thrown, making the first line be "try" instead.

The test output has always seemed to arrive in an incredibly strange order, but surely there is a pattern we can count on.

Another solution may be to see if we can get the actual test result object back instead of just the output; then we could find a test's errors directly with its ``errors`` member. https://docs.python.org/3/library/unittest.html?highlight=testresult#unittest.TestResult
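
As a rough sketch of that approach (with a stand-in test case, since none of our real ones appear here), the returned `unittest.TestResult` can be inspected directly:

```python
import unittest

class ExampleTest(unittest.TestCase):
    """Stand-in for one of our real test cases."""
    def test_something(self):
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest)
result = unittest.TextTestRunner().run(suite)

# errors and failures are lists of (test, traceback_string) pairs,
# so no output parsing is needed to know what went wrong
for test, traceback_string in result.errors + result.failures:
    print(test.id(), traceback_string)

print("Success:", result.wasSuccessful())
```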

@tngreene tngreene added this to the v3.4.0-beta milestone Sep 5, 2017

@tngreene tngreene self-assigned this Sep 5, 2017


tngreene commented Nov 22, 2017

Good news! The answer probably lies in `tests/__init__.py`'s `runTestCases` function:

```python
for testCase in testCases:
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(testCase)
    expected_logger_errors += testCase.expected_logger_errors
    expected_logger_warnings += testCase.expected_logger_warnings

unittest.TextTestRunner().run(suite)
```

We can customize our TestRunner to produce less garbage, and use the return value of `run()` to get the information we want directly.
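
A minimal sketch of that customization (with a stand-in test case; in our code `suite` comes from the loop above): redirect the runner's output to a buffer we control and read the returned result object instead of scraping stdout.

```python
import io
import unittest

class Demo(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Demo)

stream = io.StringIO()  # swallow the runner's chatter
result = unittest.TextTestRunner(stream=stream, verbosity=0).run(suite)

if len(result.errors) + len(result.failures) != 0:
    print(stream.getvalue())  # only show the noisy output on failure
```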

In addition, we may be able to use the built-in `expectedFailures` and `unexpectedSuccesses` instead of our own hack, which was half-used and is now abandoned.
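
For example (a sketch, not our current code), the standard library already tracks both for us:

```python
import unittest

class KnownIssues(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        self.assertEqual(1, 2)  # fails today, and unittest knows to expect it

suite = unittest.defaultTestLoader.loadTestsFromTestCase(KnownIssues)
result = unittest.TextTestRunner().run(suite)

# Both lists live on the TestResult; no homemade bookkeeping required
print(len(result.expectedFailures), len(result.unexpectedSuccesses))
```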

Even without the ability to completely control the output, with a captured TestResult we'll be able to come up with our own smartly written, deliberately stupid string to compare against!

Alternatively, we could find a way to share some chunk of memory (virtual, real, or on disk) to pass the information between the processes, and there won't be any more string matching ever!
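
One hedged sketch of that idea, using a plain JSON file on disk (the keys and overall shape are assumptions, not an existing API): the process that runs the tests serializes a summary, and the launching process reads it back.

```python
import json
import tempfile
import unittest

class Demo(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Demo)
result = unittest.TextTestRunner().run(suite)

# "Child" side: write a machine-readable summary instead of relying on stdout
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"errors": len(result.errors),
               "failures": len(result.failures),
               "was_successful": result.wasSuccessful()}, f)
    summary_path = f.name

# "Parent" side: read the summary back after the child process exits
with open(summary_path) as f:
    print(json.load(f))
```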


tngreene commented Nov 22, 2017

Ideas for the much more fun, non-string-parsing way:

My guess is that we'll go with more string crunching until it fails again, and then break down and do the fun things.

@tngreene tngreene modified the milestones: v3.4.0-beta, v3.4 Dec 23, 2017


tngreene commented Jan 30, 2018

Another problem: currently we say "Assert there was 1 error", but it could be any error. As I've found while fixing translation_bone_rules and rotation_bone_rules, false positives can arise:
If error B is what you're expecting,

  • Error A can occur first, hiding whether error B occurred (or ever would have occurred) at all.
  • Error B can fail to occur while error C occurs instead, still causing "num_errors == 1" to be true.

We could really use persistent error codes like we have in WED.
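
To illustrate (the error code and the shape of the logger's error list below are hypothetical, not WED's or our actual API), asserting on a stable code means unrelated errors can no longer hide or impersonate the expected one:

```python
import unittest

class BoneRulesTest(unittest.TestCase):
    def test_rotation_rule(self):
        # Pretend the export ran and the logger collected (code, message) pairs
        logger_errors = [("ERR_BONE_ROTATION", "bad keyframe on rotating bone")]
        codes = [code for code, _ in logger_errors]
        # Passes only if the *specific* expected error occurred, regardless
        # of how many other errors happened to fire first
        self.assertIn("ERR_BONE_ROTATION", codes)
```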


tngreene commented Feb 6, 2018

Also! The logger doesn't get cleared automatically between test cases. This was discovered around the time of fa80228 with rotation_bone_rules.test.py

test_5 was expected to test against a fixture. When it failed, test_6 also failed (despite passing normally on its own) because the logger still had leftover errors. See also #303
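
A minimal sketch of a fix, with a stand-in logger (the real XPlaneLogger's reset API may differ): clearing shared state in `setUp` guarantees a failed test_5 can't leak errors into test_6.

```python
import unittest

class FakeLogger:
    """Stand-in for the shared, module-level logger."""
    def __init__(self):
        self.errors = []

    def clear(self):
        self.errors.clear()

logger = FakeLogger()

class XPlaneTestCase(unittest.TestCase):
    def setUp(self):
        # Runs before every test method, so leftover errors never carry over
        logger.clear()
```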


tngreene commented Mar 21, 2018

In addition! It would be really nice to be able to filter the output a bit to remove Blender's miscellaneous, irrelevant debug messages. Adding a -v[vv] or --verbose [level] flag to control this would be excellent. The ability to control the output entirely could be a solution all on its own, without any of the approaches described above!
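
A sketch of the proposed flag (option names are assumptions): counting repeated -v's gives the runner a verbosity level to filter Blender's chatter against.

```python
import argparse

parser = argparse.ArgumentParser(description="XPlane2Blender test runner")
parser.add_argument("-v", "--verbose", action="count", default=0,
                    help="increase verbosity: -v, -vv, or -vvv")

args = parser.parse_args(["-vv"])  # e.g. invoked as: tests.py -vv
print(args.verbose)                # 2; output below this level gets filtered
```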

See #408, which seems relevant to this problem.


tngreene commented Aug 6, 2018

Another burn today. Good news: there are a few new ways to strengthen the unit tester without much work:

  1. Stop allowing the LOGGER string to be None.
  2. Use the test runner's result to get the errors and failures not picked up by our XPlaneLogger.
  3. It would also be a good idea to remove our (brief) mentions of "expected_errors", because test cases already have that and we shouldn't re-invent the wheel.

tngreene commented Aug 7, 2018

Also! We could replace our custom logger with Python's built-in `logging` module! That would be fun and would give a lot better filtering of debug output during unit testing. Perhaps it could even help with printing the outcomes of the tests as well.
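
A sketch of what that buys us (the logger name is an assumption): the standard `logging` module gives level-based filtering for free during unit testing.

```python
import logging

logging.basicConfig(level=logging.WARNING)  # hide DEBUG/INFO chatter in tests
logger = logging.getLogger("xplane2blender")

logger.debug("verbose export detail")  # filtered out at WARNING level
logger.error("OBJ export failed")      # still shown, and countable by a handler
```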

tngreene added a commit that referenced this issue Aug 13, 2018

Another addition to #302 (we can now access the test runner args inside the TestCase) and #319 (The stupid xplane_config.debug/log is now deleted)

tngreene added a commit that referenced this issue Aug 14, 2018

Fixes #302 and greatly greatly greatly improves the test runner! We removed the useless expected_logger_errors, used the TestResult data, print out final warning messages, simplified the test code, removed the very very very problematic string crunching code (dot line E and F stupidity), gave __init__.py access to the args passed into tests.py, and have small profiling info about how long it takes to run a test.

tngreene commented Aug 14, 2018

Closing and splitting the other bits into smaller bugs!

@tngreene tngreene closed this Aug 14, 2018
