
Allow expectThat to return True/False if the condition passed or failed. #239

Open

blakev opened this issue Sep 16, 2016 · 7 comments

blakev commented Sep 16, 2016

Because expectThat does a deferred assertion, it would be nice to be able to know, while the TestCase is running, whether or not the condition passed, for advanced logging or logic.

Our use case is using expectThat matchers in long-running tests that need to log both the successes AND the failures of those checks, not just the failures reported at the end of the run (returned by getDetails()).

thomir (Member) commented Sep 18, 2016

Hi @blakev,

I don't understand the need for this change. Perhaps you could show an example test where the existing functionality makes things awkward and this feature would help?

Generally speaking, I think adding flow control (if/else/etc.) statements to a test is a bad idea: it becomes harder to reason about what was tested by looking at the code. If you find yourself needing to do this, there's almost certainly a better way - perhaps by splitting the test into two or more tests that exercise each path separately.

Cheers,

blakev (Author) commented Sep 18, 2016

expectThat isn't actually asserting anything while the test is running; instead, the mismatches are collected and reported all together at the end of the method. Because code continues executing after expectThat is called, you can only get its details at the end. The test has no conditional logic; rather, the logging/reporting metrics need to know which expectations passed and which failed.

The updated return statement doesn't break anything, because nobody calls the method expecting a None return value anyway.
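
For illustration, roughly what I have in mind could be sketched in a subclass using only the public Matcher protocol (matcher.match() returns None on success or a Mismatch object on failure). This is just a sketch, not the actual testtools implementation, and the class name is made up:

    import testtools


    class BoolExpectTestCase(testtools.TestCase):
        # Hypothetical subclass, for illustration only.

        def expectThat(self, matchee, matcher, message='', verbose=False):
            """Like expectThat, but also report whether the check passed."""
            # Evaluate the matcher once up front; matchers are expected to be
            # side-effect free, so matching twice (here and inside the super
            # call) is safe, if slightly wasteful.
            mismatch = matcher.match(matchee)
            # Delegate to the real expectThat so the deferred-failure
            # bookkeeping (details, forced failure at the end) is unchanged.
            super(BoolExpectTestCase, self).expectThat(
                matchee, matcher, message, verbose)
            return mismatch is None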

thomir (Member) commented Sep 19, 2016

Hi @blakev,

You are correct about the return value - that's what happens when you review code before your first coffee in the morning, sorry!

I still don't understand the rationale for wanting this. You say "The test has no conditional logic; rather, the logging/reporting metrics need to know which expectations passed and which failed" - which reporting mechanisms, exactly? Perhaps you could show us what you're ultimately trying to achieve here? Some example code would help, I think.

blakev (Author) commented Sep 19, 2016

Consider a per-test-method teardown that logs externally:

    def expectThat(self, matchee, matcher, message='', verbose=False):
        """ A deferred assertion helper method.

            Args:
                matchee: An object to match with matcher.
                matcher: An object meeting the testtools.Matcher protocol.
                message (str): Optional message included in the mismatch details.
                verbose (bool): Whether to include verbose mismatch output.

            Returns:
                None
        """
        self.logger.debug('called expectThat, ' + message)
        # thin wrapper around testtools' `expectThat` method, for additional
        # conditions and logging.
        super(CouncilTestCase, self).expectThat(matchee, matcher, message, verbose)

    def tearDown(self):
        """ TearDown a test method defined in a TestCase class. """

        self.logger.debug('test-case teardown')

        interfaces = self._result_interfaces()

        if interfaces:
            self.logger.debug('logging pass/fail details to (%d) interfaces' % len(interfaces))

            # if we have a results instance class defined
            # then we want to log these test results to that database
            # using its interface.

            details = self.formatted_details(misc=self._misc_to_dict())

            if self.failed:
                row_id = self.__log_failed(details, interfaces)
                self.push_status('Failed', {'done': True, 'passed': False}, 100)
            else:
                row_id = self.__log_success(details, interfaces)
                self.push_status('Passed', {'done': True, 'passed': True}, 1)

            self.push_status('Logged to database', {'database_rowid': row_id})

        self._auto_teardown(None)
        self._with_driver = False
        self._clear_misc(delete_persist=False)

        super(CouncilTestCase, self).tearDown()

You know whether or not the whole method passed using standard testtools.TestCase idioms -- but you don't know which expectThat calls passed or failed. You can infer which ones failed by pulling them from .getDetails().

It would be much easier to know, per expectThat call, whether the deferred assertion succeeded or failed while the test is running, especially for long-running tests that may not even reach the end of the method.

Simply returning True or False would avoid the inference step, as well as the need to parse the messages back out of .getDetails():

        for key, value in detail_items:
            # iterate through all of the details, trying to sort them
            # into buckets of like-typed objects

            if key.lower().startswith(config.mismatch_error_str):
                # filter out all of the MismatchErrors logged by a call
                # to :func:`.expectThat`.
                split_msg = value.as_text().split(':', 2)

                if len(split_msg) == 2:
                    # no message parameter was passed, fill with a blank
                    split_msg.append('')

                # strip all of the whitespace
                split_msg = list(map(lambda x: unicode(x).strip(), split_msg))

                _, err_str, msg_str = split_msg

                mismatches.append('%s,%s' % (err_str, msg_str))
                continue
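
If expectThat returned a boolean, the call sites could instead log in place, something like the snippet below (illustrative only: Equals comes from testtools.matchers, while the response object, the message text and self.logger are stand-ins from our own wrapper):

    from testtools.matchers import Equals

    # Hypothetical usage if expectThat returned True/False: no need to pull
    # MismatchError entries back out of getDetails() afterwards.
    passed = self.expectThat(response.status_code, Equals(200), 'login status')
    self.logger.info('check "login status": %s', 'PASS' if passed else 'FAIL')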

rbtcollins (Member) commented

If I may summarise to see if I understand the crux of the issue:

  • you wish to capture additional details of successes to an external test tracking system?

Sure, you can do that with something like your patch, and I'm ok with something like that being added - though we should take the time to consider the impact on async tests, where futures are a big part of the API.

I'm not sure if you are aware, but tearDown is not guaranteed to run - if setUp fails, tearDown does not run; only cleanUps are guaranteed to run. The reasons for this are fundamentally backwards compatibility, though cleanUps can provide stronger guarantees in part due to Python's cooperative subclassing behaviour (we can't guarantee that someone remembers to upcall in tearDown).
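
For example (just a sketch, reusing the names from your snippet above), registering the external logging as a cleanup from setUp means it runs even in cases where tearDown would be skipped, and it does not depend on every subclass remembering to upcall:

    def setUp(self):
        super(CouncilTestCase, self).setUp()
        # Cleanups registered here run whether the test passes or fails, and
        # even when a later setUp step blows up before the test body starts.
        self.addCleanup(self._log_results_externally)

    def _log_results_externally(self):
        # Hypothetical helper holding the body of the tearDown shown above,
        # i.e. pushing the pass/fail details to the external interfaces.
        ...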

So I wonder if it would be useful perhaps to decouple the two considerations you have:

  • capture success metadata
  • capture to an external tracking system

For the former, addDetail will capture arbitrary (and in principle arbitrarily-sized, though I'm sure we have not got it streaming and/or buffering everywhere yet) attachments on both success and failure - the default UI just doesn't show anything if the test case succeeds.

For the latter, a custom StreamResult object - such as subunit.run uses - would allow you to send structured (e.g. JSON) and unstructured (e.g. log messages) data to an external tracking system. You can use a Y adapter to both capture to such a system and show a text UI to folks running the tests interactively. One advantage of doing it this way is that your test code would not be responsible for capturing to the external system, and so would neither forget to do so nor need awkward ways to control it externally. OpenStack uses subunit2sql as part of its test aggregation and reporting infrastructure, and that's a very similar use case.
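
To make that shape concrete, here is a rough sketch. For the success-metadata part, something like this inside a test records details regardless of the outcome (addDetail and text_content are the real testtools APIs; the detail name and content are made up):

    from testtools.content import text_content

    # Attached on success as well as failure; the default UI just does not
    # display details for passing tests.
    self.addDetail('login-check', text_content('status=200 elapsed=1.2s'))

And for the external-capture part, a sketch of a custom StreamResult plus the Y adapter - the tracker object and its record() method are stand-ins for whatever client your tracking system provides, while StreamResult, StreamSummary and CopyStreamResult are the real testtools classes:

    from testtools import CopyStreamResult, StreamResult, StreamSummary


    class TrackerStreamResult(StreamResult):
        """Forward test status events to an external tracking system."""

        def __init__(self, tracker):
            super(TrackerStreamResult, self).__init__()
            self.tracker = tracker  # hypothetical external client

        def status(self, test_id=None, test_status=None, test_tags=None,
                   runnable=True, file_name=None, file_bytes=None, eof=False,
                   mime_type=None, route_code=None, timestamp=None):
            # Final statuses include 'success', 'fail', 'skip', 'xfail', etc.;
            # attachments (details) arrive as file_name/file_bytes events.
            if test_status is not None:
                self.tracker.record(test_id, test_status, timestamp)


    # The "Y adapter": one event stream feeds both the external tracker and
    # an in-memory summary used for the local report.
    summary = StreamSummary()
    result = CopyStreamResult([summary, TrackerStreamResult(my_tracker)])
    # my_tracker: hypothetical client instance for your tracking system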

Anyhow, I don't intend to try and force you to solve the problem the way I would ;) - so let's focus on the review at hand.

freeekanayaka (Member) commented

@blakev is the summary that Robert provided a good representation of the issue at hand? If it is, would you be happy to go down that path (although it surely requires more work on your side), or would you like to move forward with the PR attached to this issue? In the latter case, could you please address the review comments from Robert?

blakev (Author) commented Nov 28, 2016

@freeekanayaka I implemented his suggestion in the project I was working on and forgot to go back and apply it upstream. I'll work on it this week and resubmit a pull request basically mirroring @rbtcollins' outline.
