
More test markers and fixtures (and move to pytest?) #23197

Open
oscargus opened this issue Mar 2, 2022 · 8 comments
Labels
Testing Related to the test runner. Do not use for test failures unless it relates to the test runner itself

Comments

@oscargus (Contributor) commented Mar 2, 2022

As discussed here, it would be useful to modify the test structure a bit.

There are a few different possibilities, so I expect an initial discussion here before we can take action.

  • Add a too_slow marker for tests that are simply too slow to execute as part of CI; tests that are currently marked @SKIP("slow"), @SKIP("Too slow for @slow"), or skip("Too slow for travis.") should use this instead.
  • Add a hanging marker for tests that currently hang (or are just really slow, who knows?). A sketch of how such markers could be registered follows this list.
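A minimal sketch of how such markers might be registered under pytest (the marker names come from the bullets above; the descriptions and the conftest.py wiring are my assumptions, not existing sympy code):

```python
# conftest.py -- hypothetical registration of the proposed markers
import pytest

def pytest_configure(config):
    # Registering the markers documents them in `pytest --markers` and
    # avoids PytestUnknownMarkWarning when they are used.
    config.addinivalue_line(
        "markers", "too_slow: too slow to run as part of CI")
    config.addinivalue_line(
        "markers", "hanging: currently hangs (or is extremely slow)")

# In a test file, a test would then opt in like this and could be
# deselected with e.g. `pytest -m "not too_slow"`:
@pytest.mark.too_slow
def test_something_enormous():
    pass
```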

There is also the possibility of using test fixtures to handle optional library imports in the code. By creating a test fixture called np, one can simply add np as an argument to a test, and the test will be skipped if NumPy is not available (of course the fixture np has to check for it and behave correctly); see the sketch below. This could be useful for all the current tests that skip when a library is not found. Possibly one could do the same for e.g. C compilers as well.
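A minimal sketch of such a fixture, assuming pytest-style fixtures (the fixture name np comes from the paragraph above; the example test is made up):

```python
# conftest.py -- hypothetical `np` fixture for optional NumPy
import pytest

@pytest.fixture
def np():
    # importorskip() returns the module if it imports cleanly and
    # skips the requesting test otherwise.
    return pytest.importorskip("numpy")

# In a test file: requesting the fixture makes the test skip
# automatically when NumPy is not installed.
def test_array_sum(np):
    assert int(np.array([1, 2, 3]).sum()) == 6
```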

Someone can probably fill in the details about our testing infrastructure. It seems like we are not using pytest directly, although it is possible to run the tests using pytest. Maybe we should try to fully migrate to only using pytest?

@oscargus oscargus added the Testing Related to the test runner. Do not use for test failures unless it relates to the test runner itself label Mar 2, 2022
@oscarbenjamin (Contributor)

At some point sympy was very wary about having external dependencies and so made its own version of pytest for running the test suite. Since the test suite is compatible with pytest it is still possible to use pytest. Some contributors (perhaps mostly just me) use pytest to run the test suite. However CI and the dev instructions use sympy's internal test runner instead.

There are a few advantages to sympy having its own test runner, such as the ability to customise the runner in ways that would be more difficult with pytest. For example, pytest's default doctest support is not that customisable, and sympy was able to make an improved doctest runner. Also, pytest lacks some basic features like timeouts and the ability to skip slow tests by default.

There is a clear disadvantage in sympy having its own test runner, which is the need to maintain a whole bunch of code that already exists and is better maintained elsewhere. For example, sympy's doctest runner was forked by astropy and made into a pytest plugin:
https://github.com/astropy/pytest-doctestplus

Likewise if the features sympy wants are things like timeouts then it would be better to add those to pytest or to pytest plugins so that other libraries can make use of those and a wider community can share the maintenance burden.

In any case the current situation is that we need to make sure that it is possible to run the tests under both sympy's and pytest's test runners. So if we want something that is a feature of pytest then the feature has to be implemented in sympy's test runner as well. Conversely if we want to add something that requires actually changing the test suite then that needs to remain compatible with pytest as well.

Personally I think that we should just ditch sympy's test runner and use pytest, simply because there is plenty of real code to maintain in sympy without needlessly reinventing basic things.

The main features of pytest that I use that sympy's test runner does not have are:

  1. Running tests in parallel: e.g. install pytest-xdist and then use -n8 to run tests over 8 cores.
  2. Rerunning the last failed tests with --lf. This is extremely useful with sympy's test suite.

I don't plan to implement either of those features in sympy's test runner, because they don't need to be implemented there when you can just use pytest. There are other features of pytest, though, that would require changes to the test suite and so can't be used as long as sympy's own test runner is supported, unless the features are added to the runner (e.g. #15497).

@wermos (Contributor) commented Mar 12, 2022

If we use pytest for testing, then CI times should speed up significantly, because GitHub Actions provides 2 cores on Windows and Linux, so we would be able to run the tests in parallel.

It appears that Travis also allows you to use 2 cores.

@oscarbenjamin (Contributor)

I did previously try using pytest with 2 cores on Travis but did not see the expected speed up (#16149).

Feel free to give it a try with GitHub Actions though. Note that anyone can test this in their own fork just by pushing a change to the runtests.yml workflow.

Possibly it can be faster with pytest --assert=plain, which might be worth doing anyway. pytest's assertion introspection is sometimes really useful, but it can also be quite confusing with sympy expressions.

@ThePauliPrinciple (Contributor) commented Mar 13, 2022

Locally I have not yet gotten pytest to work with multiple cores: it "hangs" (giving no useful feedback) at about 97%. The longest I have let it run was an hour after it stopped reporting anything.

@oscarbenjamin (Contributor)

it "hangs"

Are you running it on the whole codebase? Maybe try something smaller first like sympy/core.

Note that an important difference between pytest and bin/test is that pytest does not skip slow tests by default. You need to run it like pytest -m "not slow" to skip the slow tests.

Also note that the motivation for this issue (see the OP) is that there are some slow tests that basically never finish; those are currently skipped in CI but not when you run the tests locally (either with bin/test or with pytest). With better markers we could make sure those are also skipped when running the tests locally under either test runner.

@ThePauliPrinciple (Contributor) commented Mar 13, 2022

bin/test sympy works fine for me without any further flags (although it takes a while since it only uses a single core). The "not slow" option seems to fix pytest for me.

Is it at all possible to make pytest ignore the slow tests by default for sympy? Otherwise I would be against making pytest the default.

EDIT: this is a game-changer for me, by the way. Running the tests on multiple cores makes it worth just running them locally rather than relying on the CI, since otherwise I wouldn't be able to continue working on sympy for an hour every time I want to commit.

@oscarbenjamin (Contributor)

Is it at all possible to make pytest ignore the slow tests by default for sympy?

Possibly there is a way to do this. Currently this is something that is special-cased in bin/test. Personally I think that a lot of the slow tests should just be part of a separate test suite or something. Certainly the tests that take forever should just be skipped in all situations (again, see the OP).

this is a game-changer for me btw, running the tests with multicore makes it worth it to just run the tests

When you actually get to use it in a real situation you'll see that --lf is a real game changer.

@oscarbenjamin (Contributor)

Possibly there is a way to do this.

There are some suggestions here:
https://stackoverflow.com/questions/33084190/default-skip-test-unless-command-line-parameter-present-in-py-test
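For reference, the approach in those answers (also shown in the pytest documentation) looks roughly like this; the --runslow flag name is illustrative:

```python
# conftest.py -- sketch of skipping tests marked `slow` unless a flag is given
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False,
        help="also run tests marked as slow")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        return  # flag given: leave the collected items untouched
    skip_slow = pytest.mark.skip(reason="slow test: pass --runslow to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
```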
