Selecting a subset of tests to run #155

Closed

wholtz opened this issue Dec 2, 2022 · 4 comments

@wholtz
Contributor

wholtz commented Dec 2, 2022

The pytest docs list multiple ways of selecting a subset of tests to run.

One of these methods is to run pytest -k keyword:

$ cat bar/tests/test_bar.yml 
- name: touch bar.txt
  command: touch bar.txt
$ cat foo/tests/test_foo.yml 
- name: touch foo.txt
  command: touch foo.txt
$ rm -rf /tmp/example
$ pytest --basetemp /tmp/example --kwd -k foo 
========================================================= test session starts =========================================================
platform linux -- Python 3.11.0, pytest-7.2.0, pluggy-1.0.0
rootdir: /Users/username/projects/pytest_workflow01
plugins: workflow-1.7.0.dev0
collecting ... 
collected 2 items / 1 deselected / 1 selected                                                                                         

touch bar.txt:
	command:   touch bar.txt
	directory: /tmp/example/touch_bar.txt
	stdout:    /tmp/example/touch_bar.txt/log.out
	stderr:    /tmp/example/touch_bar.txt/log.err
'touch bar.txt' done.

touch foo.txt:
	command:   touch foo.txt
	directory: /tmp/example/touch_foo.txt
	stdout:    /tmp/example/touch_foo.txt/log.out
	stderr:    /tmp/example/touch_foo.txt/log.err
'touch foo.txt' done.

foo/tests/test_foo.yml .                                                                                                        [100%] Keeping temporary directories and logs. Use '--kwd' or '--keep-workflow-wd' to disable this behaviour.


=================================================== 1 passed, 1 deselected in 0.16s ===================================================
$ tree /tmp/example
/tmp/example
├── touch_bar.txt
│   ├── bar
│   │   └── tests
│   │       └── test_bar.yml
│   ├── bar.txt
│   ├── foo
│   │   └── tests
│   │       └── test_foo.yml
│   ├── log.err
│   └── log.out
└── touch_foo.txt
    ├── bar
    │   └── tests
    │       └── test_bar.yml
    ├── foo
    │   └── tests
    │       └── test_foo.yml
    ├── foo.txt
    ├── log.err
    └── log.out

10 directories, 10 files

As expected, only one of the tests was selected, as it matched the keyword foo. But despite only one test being selected, both tests executed, as can be seen by the presence of both foo.txt and bar.txt. The execution of both tests was unexpected.

The behavior is slightly different when passing pytest a directory of tests to run:

$ cat bar/tests/test_bar.yml 
- name: touch bar.txt
  command: touch bar.txt
$ cat foo/tests/test_foo.yml 
- name: touch foo.txt
  command: touch foo.txt
$ rm -rf ../test-temp/
$ pytest --basetemp /tmp/example --kwd foo/tests/
========================================================= test session starts =========================================================
platform linux -- Python 3.11.0, pytest-7.2.0, pluggy-1.0.0
rootdir: /Users/username/projects/pytest_workflow01
plugins: workflow-1.7.0.dev0
collecting ... 
collected 2 items                                                                                                                     

touch bar.txt:
	command:   touch bar.txt
	directory: /tmp/example/touch_bar.txt
	stdout:    /tmp/example/touch_bar.txt/log.out
	stderr:    /tmp/example/touch_bar.txt/log.err
'touch bar.txt' done.

touch foo.txt:
	command:   touch foo.txt
	directory: /tmp/example/touch_foo.txt
	stdout:    /tmp/example/touch_foo.txt/log.out
	stderr:    /tmp/example/touch_foo.txt/log.err
'touch foo.txt' done.

bar/tests/test_bar.yml .                                                                                                        [ 50%]
foo/tests/test_foo.yml .                                                                                                        [100%] Keeping temporary directories and logs. Use '--kwd' or '--keep-workflow-wd' to disable this behaviour.


========================================================== 2 passed in 0.26s ==========================================================
$ tree ../test-temp/
/tmp/example
├── touch_bar.txt
│   ├── bar
│   │   └── tests
│   │       └── test_bar.yml
│   ├── bar.txt
│   ├── foo
│   │   └── tests
│   │       └── test_foo.yml
│   ├── log.err
│   └── log.out
└── touch_foo.txt
    ├── bar
    │   └── tests
    │       └── test_bar.yml
    ├── foo
    │   └── tests
    │       └── test_foo.yml
    ├── foo.txt
    ├── log.err
    └── log.out

10 directories, 10 files

This time both tests are collected (and, I assume, selected?). Here the pytest accounting of passed tests does indicate that both tests ran, which agrees with the output files generated. But this is once again not what I expected, as I thought only the tests under foo/tests would run.

I am aware that I can use tags to get similar functionality. However, that requires having tags already configured within the YAML test definitions, whereas using -k keyword or directory arguments for test selection could be done without setting up tags in the YAML test definitions.

@rhpvorderman
Member

rhpvorderman commented Dec 2, 2022

I am aware that I can use tags to get similar functionality. However, this requires having tags already configured within the yaml test definitions

This is not entirely true. Test names are also tags, but I get your point.
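
For illustration, assuming the tags key in the YAML test definitions and the --tag command-line option described in the pytest-workflow documentation, tag-based selection looks roughly like this, and the test name itself can also be passed as a tag:

$ cat foo/tests/test_foo.yml
- name: touch foo.txt
  command: touch foo.txt
  tags:
    - foo
$ pytest --tag foo
$ pytest --tag "touch foo.txt"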

The pytest docs list multiple ways of selecting a subset of tests to run. [...]

I completely agree, and actually using the -k and -m flags was the preferred implementation. However, 4 years ago, when I wrote pytest-workflow, I decided to go for --tags instead. There are multiple reasons for this.

  • Running the tests is trivial compared to running the workflows. Yet pytest assumes all the code is executed in the test. Running a workflow for each test is a massive waste of resources. I could have fixed this using fixtures, but it was not clear to me how to implement that with the pytest 4 API. (It might be a lot easier in the pytest 7 API, as that is much better, but it didn't exist when I wrote it.) So pytest-workflow first runs the workflows using pytest-workflow-specific code and then runs the tests using the pytest API. One advantage of this setup is that the workflow running can occur in parallel without requiring extra plugins.
  • Since the workflow-running code does not utilize the pytest API, this introduces a problem: how do I disable workflows when all the tests that spawn from them are disabled? In that case I need to evaluate the -k parameter. The -k and -m parameters are not simple strings. They are expressions, evaluated on test names (see the short example after this list). In your example it is quite straightforward, but when expressions are used it seems much more difficult. This is why that system was sidestepped and tags were introduced instead. These are strings, and only work with direct matching. This is simple, straightforward and very easy to implement without bugs.
  • Bug-free code is the number one priority of the pytest-workflow project. When there are bugs in the test framework, debugging becomes a double burden: is this a bug in my workflow or in pytest-workflow? I think it has succeeded in that goal, having run thousands and thousands of tests reliably. This does, however, limit how far we can go in interacting with pytest. As I said, in the pytest 4 API it was not easy to get several things working. There was quite some reliance on so-called 'underscore methods' early on in pytest-workflow, which are of course private and not guaranteed to be stable. Over its maturation period these underscore methods were removed as much as possible. They were only used for typing purposes, and are now finally (since pytest 7) completely removed.
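
For example, -k takes a boolean expression rather than a plain string, so matching it against workflow names is more involved than a substring check. A command like the following is valid (shown for illustration only):

$ pytest -k "foo and not (bar or baz)"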

So in short, this problem could be fixed by running workflows as fixtures. But this raises several problems. How to implement this in a way that:

  1. matches pytest-workflow's standards of quality?
  2. still allows workflows to be executed in parallel?
  3. retains backwards-compatibility with --tags? This project has been in use for almost 4 years, so this feature simply cannot be thrown out.

I would have loved to implement it using -k and -m four years ago, but I wasn't as experienced then as I am now, and I do not know if the tooling back then allowed it in the first place. A quick look at pytest-xdist shows that session-scoped fixtures will still be run in parallel unless file locks are used in the fixtures themselves. So the alternative implementation not only introduces a dependency on pytest-xdist (which may lead to trouble down the road if xdist changes its API) but also requires a reliance on file locks, which is highly platform dependent.
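
For context, a rough sketch of that session-fixture pattern, not pytest-workflow's actual implementation. It assumes pytest-xdist (for the worker_id fixture), the filelock package, and a placeholder workflow command:

# Rough sketch of the pattern discussed above, not pytest-workflow code.
# Assumes pytest-xdist (worker_id fixture) and the filelock package.
import json
import subprocess

import pytest
from filelock import FileLock


def _run_workflow(workdir):
    # Placeholder for launching one workflow from the YAML definition.
    workdir.mkdir(parents=True, exist_ok=True)
    proc = subprocess.run(["touch", "foo.txt"], cwd=workdir, capture_output=True)
    return {"returncode": proc.returncode}


@pytest.fixture(scope="session")
def workflow_result(tmp_path_factory, worker_id):
    if worker_id == "master":
        # Not running under xdist: just run the workflow once.
        return _run_workflow(tmp_path_factory.mktemp("workflow"))

    # Under xdist every worker has its own session, so serialize on a file
    # lock in the temp directory shared by all workers.
    root_tmp = tmp_path_factory.getbasetemp().parent
    marker = root_tmp / "workflow_result.json"
    with FileLock(str(marker) + ".lock"):
        if marker.is_file():
            return json.loads(marker.read_text())
        result = _run_workflow(root_tmp / "workflow")
        marker.write_text(json.dumps(result))
        return result

Individual tests would then depend on workflow_result and only inspect its outputs, so deselecting all of them would also skip the workflow, but only at the cost of the extra dependencies mentioned above.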

This is a very tough nut to crack and I am not sure if it is crackable. Sorry for the wall of text, but this is one of the design decisions that needs some context.

@wholtz
Contributor Author

wholtz commented Dec 2, 2022

Hi @rhpvorderman - thank you for the rapid reply, the detailed answer, and of course for making and maintaining pytest-workflow - it's a great tool!

For my use case, I don't actually need -k or -m functionality. I only need pytest test_dirs to select the tests in test_dirs. Despite what I show above in my second example, passing directories to pytest does work to select only the tests in the passed directory. I edited the output in my example because I'm actually running pytest in a container. Due to an error in a script, the pytest command I was actually running was:

pytest "" --basetemp /tmp/example --kwd foo/tests/

instead of what I claimed to be running:

pytest --basetemp /tmp/example --kwd foo/tests/

For some reason, adding an empty parameter ("") appears to make pytest run all tests. I'm not sure why that is, but I removed the empty parameter and now I'm getting the behavior I need. Sorry for that confusion.

Would you be open to a documentation PR where I add that -k and -m are not supported? And perhaps link to your above explanation of the reasoning?

@rhpvorderman
Member

Would you be open to a documentation PR where I add that -k and -m are not supported? And perhaps link to your above explanation of the reasoning?

That would be great! Thanks!

@wholtz
Contributor Author

wholtz commented Dec 6, 2022

Submitted PR #156

wholtz closed this as completed Dec 6, 2022