Parametrised tests like in pytest #15497

Open
oscarbenjamin opened this issue Nov 15, 2018 · 4 comments
Labels: Testing (Related to the test runner. Do not use for test failures unless it relates to the test runner itself)

Comments

@oscarbenjamin (Contributor)

It doesn't look as if sympy's test module has a way to do parametrised tests like pytest does. These could be very useful in, e.g., test_ode.py, which has many repetitive but not always consistent tests.

With pytest it works like this:

# solve_test.py

from sympy import solve, Symbol
from pytest import mark
parametrize = mark.parametrize

x = Symbol('x')

eqns = {
    'linear': (x-1, [1]),
    'quadratic': (x**2 - x, [0, 1]),
    'cubic': (x**3, [1])  # wrong on purpose: solve(x**3) gives [0], so this case fails
}

@parametrize('eqname', eqns.keys())
def test_solve(eqname):
    eq, sol = eqns[eqname]
    assert solve(eq) == sol

Then each case becomes a separate test:

$ pytest solve_test.py
========================================== test session starts ==========================================
platform linux -- Python 3.6.2, pytest-4.0.0, py-1.7.0, pluggy-0.8.0
rootdir: /space/enojb/current/sympy/sympy, inifile:
collected 3 items                                                                                       

solve_test.py ..F                                                                                 [100%]

=============================================== FAILURES ================================================
___________________________________________ test_solve[cubic] ___________________________________________

eqname = 'cubic'

    @parametrize('eqname', eqns.keys())
    def test_solve(eqname):
        eq, sol = eqns[eqname]
>       assert solve(eq) == sol
E       assert [0] == [1]
E         At index 0 diff: 0 != 1
E         Use -v to get the full diff

solve_test.py:16: AssertionError
================================== 1 failed, 2 passed in 0.48 seconds ===================================

You can then easily re-run only the failed test with:

$ pytest solve_test.py -k 'solve[cubic]'
========================================== test session starts ==========================================
platform linux -- Python 3.6.2, pytest-4.0.0, py-1.7.0, pluggy-0.8.0
rootdir: /space/enojb/current/sympy/sympy, inifile:
collected 3 items / 2 deselected                                                                        

solve_test.py F                                                                                   [100%]

=============================================== FAILURES ================================================
___________________________________________ test_solve[cubic] ___________________________________________

eqname = 'cubic'

    @parametrize('eqname', eqns.keys())
    def test_solve(eqname):
        eq, sol = eqns[eqname]
>       assert solve(eq) == sol
E       assert [0] == [1]
E         At index 0 diff: 0 != 1
E         Use -v to get the full diff

solve_test.py:16: AssertionError
================================ 1 failed, 2 deselected in 0.46 seconds =================================

This looks like something that would be very useful in sympy but doesn't seem to be implemented. On the other hand, it seems like maybe the best way to get this would just be to use pytest.

@asmeurer (Member)

I've always just used a loop to parameterize a test.
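
For comparison, the loop style might look something like this (a sketch, reusing the eqns dict and imports from the first comment):

from sympy import solve
# (assumes the eqns dict defined in the first comment)

def test_solve():
    for eqname in eqns:
        eq, sol = eqns[eqname]
        # a failure here aborts the whole loop, and the failing
        # case name is not reported on its own
        assert solve(eq) == sol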

@oscarbenjamin (Contributor, Author)

With a pytest parametrised test it is possible to easily re-run the test that failed without waiting for the whole loop. Also, the output from bin/test does not make it clear which item in the loop has failed.

If you look at what pytest has done above, it has told me that test_solve[cubic] is the test that failed. I can re-run only that test. I can also use the --lf flag to re-run all the tests that failed, and that will run much quicker than bin/test would.
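
For example, re-running only the last failures would be (--lf is pytest's last-failed flag):

$ pytest solve_test.py --lf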

With bin/test in the loop situation I would have to enter the debugger just to work out which case in the loop failed. There would be no way to easily re-run just that case without waiting for the previous items in the loop to run as well.

@asmeurer (Member)

My biggest concern with this sort of thing has always been over complicating the tests. Tests should be as simple as possible, so that you can be sure that they are correct. Parameterization is relatively simple, but things can get complicated fast.

I also never really liked pytest's parameterization syntax. I think it would be simpler to write @parameterize(eqname=eqns.keys()), or even use function annotations once we stop caring about Python 2 support.

@oscarbenjamin (Contributor, Author)

I also dislike the syntax but that would be easy to solve: just make a helper on top of pytest's parametrize.
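
A minimal sketch of such a helper, using the keyword syntax suggested above (the helper itself is hypothetical, not an existing sympy or pytest API):

from pytest import mark
from sympy import solve, Symbol

def parametrize(**kwargs):
    # Accept a single name=values keyword and delegate to
    # pytest.mark.parametrize, giving @parametrize(eqname=...) syntax.
    (name, values), = kwargs.items()
    return mark.parametrize(name, list(values))

x = Symbol('x')
eqns = {
    'linear': (x - 1, [1]),
    'quadratic': (x**2 - x, [0, 1]),
}

@parametrize(eqname=eqns.keys())
def test_solve(eqname):
    eq, sol = eqns[eqname]
    assert solve(eq) == sol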

I've been spending a bit of time running tests from test_ode.py (which are slow) and it would be helpful to be able to re-run specific examples without re-running all examples from one test_ function. Being able to run only the things that failed on the last run can save significant time when you have to wait for the tests just to pick up a typo.

I think that test_ode could benefit from parametrisation since it is full of repetitive but not consistently written tests. Moving all equations out of test_ functions could lead to many benefits (I'll discuss this in a different issue). The relevant point here is just that the refactored tests would be cleaner and more usable with this feature from pytest.
