Add py.test-like "-k" test selection to unittest #76252

Closed
jonashaag mannequin opened this issue Nov 18, 2017 · 19 comments
Labels
3.7 stdlib type-feature

Comments

@jonashaag commented Nov 18, 2017

BPO 32071
Nosy @pitrou, @vstinner, @rbtcollins, @ezio-melotti, @voidspace, @jonashaag, @timgraham
PRs
  • #4496
  • #4589
    Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = <Date 2017-11-25.15:24:31.636>
    created_at = <Date 2017-11-18.18:12:40.940>
    labels = ['3.7', 'type-feature', 'library']
    title = 'Add py.test-like "-k" test selection to unittest'
    updated_at = <Date 2017-11-28.19:40:51.429>
    user = 'https://github.com/jonashaag'

    bugs.python.org fields:

    activity = <Date 2017-11-28.19:40:51.429>
    actor = 'pitrou'
    assignee = 'none'
    closed = True
    closed_date = <Date 2017-11-25.15:24:31.636>
    closer = 'pitrou'
    components = ['Library (Lib)']
    creation = <Date 2017-11-18.18:12:40.940>
    creator = 'jonash'
    dependencies = []
    files = []
    hgrepos = []
    issue_num = 32071
    keywords = ['patch']
    message_count = 19.0
    messages = ['306490', '306492', '306495', '306584', '306593', '306596', '306598', '306599', '306601', '306687', '306959', '306960', '307041', '307046', '307056', '307062', '307063', '307070', '307156']
    nosy_count = 7.0
    nosy_names = ['pitrou', 'vstinner', 'rbcollins', 'ezio.melotti', 'michael.foord', 'jonash', 'Tim.Graham']
    pr_nums = ['4496', '4589']
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'enhancement'
    url = 'https://bugs.python.org/issue32071'
    versions = ['Python 3.7']

    @jonashaag commented Nov 18, 2017

    I'd like to add test selection based on parts of the test class/method name to unittest. Similar to py.test's "-k" option: https://docs.pytest.org/en/latest/example/markers.html#using-k-expr-to-select-tests-based-on-their-name

    Here's a proof-of-concept implementation: jonashaag/cpython@master...unittest-select

    Is this something others find useful as well? If so, I'd like to work on getting this into Python stdlib proper. This is my first time contributing to the unittest framework; is the general approach taken in my PoC implementation correct in terms of abstractions? How can I improve the implementation?

    Jonas

    @jonashaag added stdlib type-feature labels Nov 18, 2017
    @jonashaag commented Nov 18, 2017

    Just to be clear, the current implementation is limited to substring matches. It doesn't support py.test-like "and/or" combinators. (Actually, py.test uses 'eval' to support arbitrary patterns.)

    So say we have a test case:

    SomeClass
        test_foo
        test_bar

    Then

    • python -m unittest -k fo matches "test_foo"
    • python -m unittest -k Some matches "test_foo" and "test_bar"
    • python -m unittest -k some matches nothing

    The -k option may be used multiple times, combining the patterns with "or":

    • python -m unittest -k fo -k b matches "test_foo" and "test_bar"

    It's also possible to use glob-style patterns, like -k "spam_*_eggs".
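
    The substring and glob behaviour described above can be sketched with fnmatch (a minimal sketch of the matching rule, not the actual patch; matches_k is a hypothetical helper name):

    ```python
    from fnmatch import fnmatchcase

    def matches_k(test_id, patterns):
        """Return True if test_id matches any -k pattern (patterns combine with "or")."""
        for pattern in patterns:
            # A pattern without glob characters acts as a case-sensitive
            # substring match, implemented here by wrapping it in '*'.
            if not any(c in pattern for c in "*?["):
                pattern = "*%s*" % pattern
            if fnmatchcase(test_id, pattern):
                return True
        return False

    ids = ["SomeClass.test_foo", "SomeClass.test_bar"]
    print([i for i in ids if matches_k(i, ["fo"])])       # only test_foo
    print([i for i in ids if matches_k(i, ["some"])])     # [] (case-sensitive)
    print([i for i in ids if matches_k(i, ["fo", "b"])])  # both, via "or"
    ```

    Matching against the full test id (class name plus method name) is what makes "-k Some" select both methods of SomeClass.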

    @pitrou commented Nov 18, 2017

    I think this would be a useful enhancement. Feel free to open a PR. I haven't looked at your implementation, but you'll also need to add tests for it.

    @pitrou added the 3.7 label Nov 18, 2017
    @jonashaag commented Nov 20, 2017

    Thanks Antoine. I will need some guidance as to the correct places to make these changes. I'm not quite sure about the abstractions here (runner, loader, suite, case, etc.)

    My PoC (see GitHub link in the first post) uses a TestSuite subclass. (I used a subclass only to make the general implementation approach easier to assess; I guess the logic should go into the main class instead.)

    Things I'm unsure of:

    1. Is suite the correct place for this kind of feature?
    2. Is the hardcoded fnmatch-based pattern matcher ok, or do we need a new abstraction "NameMatcher"?
    3. Is the approach of dynamically wrapping 'skip()' around to-be-skipped test cases OK?
    4. The try...except statement around 'test.id()' is needed because some unit tests (the unittest module's own tests) exercise error cases/error handling in the unittest framework and crash if we try to call '.id()' on them. Remove the try...except to see these errors if you're interested in the details. Is the check OK like that, or is this a code smell?

    Thanks
    Jonas

    @pitrou commented Nov 20, 2017

    First, I should clarify that I'm not a unittest maintainer. However, as far as I can tell, the maintainers have not been very active lately. Also, this is a reasonably simple enhancement (at least conceptually), so I think we can do without a maintainer's formal approval.

    To your questions:

    1. Is suite the correct place for this kind of feature?

    At its core, TestSuite is really a glorified collection of tests. The selection logic is inside the TestLoader, and indeed the TestLoader docstring says:

    """This class is responsible for loading tests according to various criteria and returning them wrapped in a TestSuite"""

    So I would recommend putting this in TestLoader.

    2. Is the hardcoded fnmatch-based pattern matcher ok, or do we need a new abstraction "NameMatcher"?

    I think the hardcoded approach is ok. Though for readability you may want to use a predicate function instead.

    3. Is the approach of dynamically wrapping 'skip()' around to-be-skipped test cases OK?

    I think this is the wrong approach. A test that isn't selected shouldn't be skipped, it should not appear in the output at all. Another reason for putting this in TestLoader :-)

    4. The try...except statement around 'test.id()' is needed because there are some unit tests (unit tests for the unittest module itself) that check for some error cases/error handling in the unittest framework, and crash if we try to call '.id()' on them

    I'd ask the question differently: do you need to call .id() to do the matching at all? Intuitively, the TestLoader probably knows about the test names already, even before it instantiates TestCases for them. TestCases shouldn't be instantiated for tests that are not selected.
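
    The point about filtering before instantiation can be illustrated with a small TestLoader subclass (a hypothetical sketch, not the merged patch; the implementation that eventually landed in 3.7 exposes this hook as the TestLoader.testNamePatterns attribute):

    ```python
    import unittest
    from fnmatch import fnmatchcase

    class SelectingLoader(unittest.TestLoader):
        """Hypothetical loader that drops unselected names during collection,
        so TestCase objects are never instantiated for them."""
        def __init__(self, patterns):
            super().__init__()
            self._patterns = patterns

        def getTestCaseNames(self, testCaseClass):
            names = super().getTestCaseNames(testCaseClass)
            def selected(name):
                # Match against "ClassName.method_name", as in the -k examples.
                full_name = "%s.%s" % (testCaseClass.__qualname__, name)
                return any(fnmatchcase(full_name, p) for p in self._patterns)
            return [name for name in names if selected(name)]

    class SomeClass(unittest.TestCase):
        def test_foo(self): pass
        def test_bar(self): pass

    suite = SelectingLoader(["*fo*"]).loadTestsFromTestCase(SomeClass)
    print(suite.countTestCases())  # 1: only test_foo is ever instantiated
    ```

    Because the filtering happens in getTestCaseNames(), deselected tests never reach the suite, the runner, or the output, which is exactly the behaviour argued for above.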

    @jonashaag commented Nov 20, 2017

    > 3) Is the approach of dynamically wrapping 'skip()' around to-be-skipped test cases OK?
    >
    > I think this is the wrong approach. A test that isn't selected shouldn't be skipped, it should not appear in the output at all. Another reason for putting this in TestLoader :-)

    My first implementation actually was mostly the test loader. Two things made me change my mind and try to make the changes in the suite code:

    • The loader code really only deals with loading (i.e., finding + importing) tests. Yes, it expects a file pattern like "test*.py" for identifying test case files, but apart from that it didn't "feel" right to put name-based selection there.
    • In py.test you'll get a console output like "5 tests passed, 1 test failed, 12 tests deselected". We can't get anything similar without making bigger changes to the test loader, runner, etc. code. Using skip() we at least have some info on "skipped" tests, although they're technically not skipped.

    Are you still saying this should go to the test loader?

    @pitrou commented Nov 20, 2017

    On 21/11/2017 at 00:23, Jonas H. wrote:

    • The loader code really only deals with loading (i.e., finding + importing) tests. Yes it expects a file pattern like "test*.py" for identifying test case files. But apart from that it didn't "feel" right to put name based selection there.

    Take a look at TestLoader.loadTestsFromName(). It does much more than look up test files; it can also look up individual classes or methods.

    • In py.test you'll get a console output like "5 tests passed, 1 test failed, 12 tests deselected". We can't get anything similar without making bigger changes to the test loader, runner, etc. code. Using skip() we at least have some info on "skipped" tests, although they're technically not skipped.

    I'm not sure what the point of displaying the number of "deselected" tests is: that information doesn't sound very useful in itself.

    But in any case, marking them skipped feels wrong to me. Most often, skipping a test is an indication that the current system is not fit to run it (such as not enough RAM, a missing third-party library, or the wrong OS entirely).

    Are you still saying this should go to the test loader?

    IMHO, yes. Looking at your posted implementation, I at least don't think TestSuite is the place for it.

    @pitrou commented Nov 20, 2017

    If it helps, think of TestLoader as collecting tests, rather than simply loading Python modules.

    @vstinner commented Nov 20, 2017

    The internal Python test runner "regrtest" already implements this feature as the -m MATCH and --matchfile FILENAME options. It's implemented in test.support._match_file().

    See bpo-31324 for a recent issue on this feature: performance issue (it's going to be fixed).

    @jonashaag commented Nov 21, 2017

    Interesting, Victor. I've had a look at the code you mentioned, but I'm afraid it doesn't really make sense to re-use any of the code.

    Here's a new patch, implemented in the loader as suggested by Antoine, and with tests.

    I'm happy to write documentation etc. once we're through with code review.

    #4496

    @pitrou commented Nov 25, 2017

    New changeset 5b48dc6 by Antoine Pitrou (Jonas Haag) in branch 'master':
    bpo-32071: Add unittest -k option (#4496)

    @pitrou commented Nov 25, 2017

    The enhancement is now pushed to git master. Thank you Jonas for contributing this!

    @pitrou closed this as completed Nov 25, 2017
    @vstinner commented Nov 27, 2017

    IMHO it's a big enhancement for the base unittest test runner and deserves to be documented in What's New in Python 3.7! Jonas: do you want to write a PR to patch Doc/whatsnew/3.7.rst?

    @jonashaag commented Nov 27, 2017

    Sure!

    @timgraham commented Nov 27, 2017

    This appears as a backwards-incompatible change for Django, because test case class attributes are now evaluated at load time, before setUpClass runs (which initializes some things that those class attributes use). It's possible to adapt Django for this change, but it might affect other projects as well.

    Traceback from Django's tests:

    $ python -Wall tests/runtests.py 
    Testing against Django installed in '/home/tim/code/django/django' with up to 3 processes
    Traceback (most recent call last):
      File "tests/runtests.py", line 477, in <module>
        options.exclude_tags,
      File "tests/runtests.py", line 282, in django_tests
        extra_tests=extra_tests,
      File "/home/tim/code/django/django/test/runner.py", line 598, in run_tests
        suite = self.build_suite(test_labels, extra_tests)
      File "/home/tim/code/django/django/test/runner.py", line 513, in build_suite
        tests = self.test_loader.discover(start_dir=label, **kwargs)
      File "/home/tim/code/cpython/Lib/unittest/loader.py", line 346, in discover
        tests = list(self._find_tests(start_dir, pattern))
      File "/home/tim/code/cpython/Lib/unittest/loader.py", line 403, in _find_tests
        full_path, pattern, namespace)
      File "/home/tim/code/cpython/Lib/unittest/loader.py", line 457, in _find_test_path
        return self.loadTestsFromModule(module, pattern=pattern), False
      File "/home/tim/code/cpython/Lib/unittest/loader.py", line 124, in loadTestsFromModule
        tests.append(self.loadTestsFromTestCase(obj))
      File "/home/tim/code/cpython/Lib/unittest/loader.py", line 90, in loadTestsFromTestCase
        testCaseNames = self.getTestCaseNames(testCaseClass)
      File "/home/tim/code/cpython/Lib/unittest/loader.py", line 234, in getTestCaseNames
        testFnNames = list(filter(shouldIncludeMethod, dir(testCaseClass)))
      File "/home/tim/code/cpython/Lib/unittest/loader.py", line 227, in shouldIncludeMethod
        testFunc = getattr(testCaseClass, attrname)
      File "/home/tim/code/django/django/utils/decorators.py", line 172, in __get__
        return self.fget(cls)
      File "/home/tim/code/django/django/test/testcases.py", line 1269, in live_server_url
        return 'http://%s:%s' % (cls.host, cls.server_thread.port)
    AttributeError: type object 'AdminSeleniumTestCase' has no attribute 'server_thread'

    @jonashaag commented Nov 27, 2017

    Ah, the problem isn't that it's running getattr() on test methods, but that it runs getattr() on all methods.

    Former code:

        attrname.startswith(prefix) and \
        callable(getattr(testCaseClass, attrname))

    New code:

        testFunc = getattr(testCaseClass, attrname)
        isTestMethod = attrname.startswith(self.testMethodPrefix) and callable(testFunc)

    This is trivial to fix. @core devs: Should I revert to original behaviour with the order of the prefix check and the getattr() call, and add a regression test that guarantees this behaviour?
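
    The ordering issue can be reproduced in isolation with a class-level descriptor that raises when touched before setup (a hypothetical sketch; the names mirror the Django traceback, but none of this is Django code):

    ```python
    class classproperty:
        """Minimal descriptor: a property evaluated on the class itself."""
        def __init__(self, fget):
            self.fget = fget
        def __get__(self, instance, cls=None):
            return self.fget(cls)

    class AdminSeleniumTestCase:
        @classproperty
        def live_server_url(cls):
            # server_thread only exists after per-class setup has run, so
            # touching this attribute at load time raises AttributeError.
            return 'http://%s:%s' % (cls.host, cls.server_thread.port)

    prefix = "test"

    def names_old_order(cls):
        # Former behaviour: the cheap prefix check short-circuits, so
        # getattr() never fires for non-test attributes.
        return [a for a in dir(cls)
                if a.startswith(prefix) and callable(getattr(cls, a))]

    def names_new_order(cls):
        # Regressed behaviour: getattr() runs for every attribute,
        # triggering the descriptor above.
        names = []
        for a in dir(cls):
            func = getattr(cls, a)  # raises for live_server_url
            if a.startswith(prefix) and callable(func):
                names.append(a)
        return names

    print(names_old_order(AdminSeleniumTestCase))  # [] -- no error
    try:
        names_new_order(AdminSeleniumTestCase)
    except AttributeError as exc:
        print("regression:", exc)
    ```

    Checking the prefix first restores the lazy behaviour: attributes that don't look like test methods are never evaluated at all.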

    @pitrou commented Nov 27, 2017

    > @core devs: Should I revert to original behaviour with the order of the prefix check and the getattr() call, and add a regression test that guarantees this behaviour?

    Yes, that sounds reasonable to me.

    @jonashaag commented Nov 27, 2017

    #4589

    • Add 3.7 What's New entry
    • Fix regression (thanks Tim for the report)

    @pitrou commented Nov 28, 2017

    New changeset 4d193bc by Antoine Pitrou (Jonas Haag) in branch 'master':
    bpo-32071: Fix regression and add What's New entry (#4589)

    @ezio-melotti transferred this issue from another repository Apr 10, 2022