More test markers and fixtures (and move to pytest?) #23197
At some point sympy was very wary about having external dependencies and so made its own version of pytest for running the test suite. Since the test suite is compatible with pytest it is still possible to use pytest, and some contributors (perhaps mostly just me) use pytest to run the test suite. However CI and the dev instructions use sympy's internal test runner instead.

There are a few advantages in sympy having its own test runner, such as the ability to customise the runner in ways that would be more difficult with pytest. For example pytest's default doctest support is not that customisable, and sympy was able to make an improved doctest runner. Also pytest lacks some basic features like timeouts and the ability to skip slow tests by default.

There is a clear disadvantage in sympy having its own test runner, which is the need to maintain a whole bunch of code that already exists and is better maintained elsewhere. For example sympy's doctest runner was forked by astropy and made into a pytest plugin: Likewise if the features sympy wants are things like timeouts then it would be better to add those to pytest or to pytest plugins so that other libraries can make use of them and a wider community can share the maintenance burden.

In any case the current situation is that we need to make sure that it is possible to run the tests under both sympy's and pytest's test runners. So if we want something that is a feature of pytest then the feature has to be implemented in sympy's test runner as well. Conversely if we want to add something that requires actually changing the test suite then that change needs to remain compatible with pytest as well.

Personally I think that we should just ditch sympy's test runner and use pytest, simply because there's plenty enough real code to maintain in sympy without needlessly reinventing basic things. The main features of pytest that I use that sympy's test runner does not have are:
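As an aside on the timeouts point above: that particular gap is nowadays covered by the third-party pytest-timeout plugin. A minimal, hypothetical configuration sketch, assuming that plugin is installed (the 300-second value is illustrative, not sympy's setting):

```ini
# pytest.ini — sketch assuming the third-party pytest-timeout plugin is installed
[pytest]
# fail any single test that runs longer than 5 minutes
timeout = 300
```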
I don't plan to implement either of those features in sympy's test runner because they don't need to be implemented there when you can just use pytest. There are other features in pytest though that would require changes in the test suite and so can't be used as long as sympy's own test runner is supported, unless the features are added to that runner as well, e.g. #15497.
If we use It appears that Travis also allows you to use 2 cores.
I did previously try using pytest with 2 cores on Travis but did not see the expected speed up (#16149). Feel free to give it a try with GitHub Actions though. Note that anyone can test this in their own fork just by pushing a change to the runtests.yml workflow. Possibly it can be faster if using
Locally I have not yet gotten pytest to work with multiple cores; it "hangs" (giving no useful feedback) at about 97%. The longest I've let it run was an hour after it stopped reporting anything.
Are you running it on the whole codebase? Maybe try something smaller first like

Note that an important difference between

Also note that the OP motivation for this issue is that there are some slow tests that basically never finish and those are currently skipped in CI but not when you run the tests locally (either with
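For reference, a couple of hypothetical invocations for trying this locally (the module path is illustrative; the commands assume pytest-xdist is installed and that slow tests carry a `slow` marker, as sympy's pytest integration uses):

```shell
# run a single test module across all available cores (pytest-xdist's -n flag)
pytest -n auto sympy/core/tests/test_basic.py

# deselect tests marked slow, mirroring what CI does
pytest -m "not slow" sympy/
```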
Is it at all possible to make

EDIT: this is a game-changer for me btw; running the tests with multicore makes it worth it to just run the tests rather than rely on the CI, because otherwise I wouldn't be able to continue with sympy for an hour every time I want to commit.
Possibly there is a way to do this. Currently this is something that is special-cased in
When you actually get to use it in a real situation you'll see that
There are some suggestions here:
As discussed here, it would be useful to modify the test structure a bit.
There are some different possibilities, so I expect an initial discussion here before it is possible to take actions.
`@SKIP("slow")`, `@SKIP("Too slow for @slow")`, and `skip("Too slow for travis.")` should use this instead.

There is a possibility to add test fixtures as an option for testing imports of libraries in the code. By creating a test fixture called `np`, one can simply add `np` as an argument to a test and the test will then be skipped if numpy is not available (of course, the `np` fixture should check for the library and behave accordingly). This could replace all the current tests that are skipped when a library is not found. Possibly one can do the same for e.g. C compilers as well.
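A minimal sketch of what such a fixture could look like, using pytest's built-in `pytest.importorskip` (the fixture name `np` and the test body are illustrative, not taken from sympy's code base):

```python
# conftest.py — hypothetical sketch of the proposed "np" fixture
import pytest

@pytest.fixture
def np():
    # importorskip returns the module if it can be imported,
    # and otherwise skips the requesting test instead of failing it.
    return pytest.importorskip("numpy")

# test_example.py — any test that lists "np" as an argument is
# skipped automatically when numpy is not installed.
def test_array_roundtrip(np):
    assert np.asarray([1, 2, 3]).sum() == 6
```

The same mechanism could cover the slow-test decorators too, e.g. replacing the assorted `@SKIP(...)` calls with a single registered `@pytest.mark.slow` marker that CI deselects.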
Someone can probably fill in the details about our testing infrastructure. It seems like we are not using pytest directly, although it is possible to run the tests using pytest. Maybe we should try to fully migrate to only using pytest?