
Warning about assert None from XFAIL tests #4639

Closed · oscarbenjamin opened this issue Jan 12, 2019 · 30 comments · Fixed by #6234
Labels: topic: marks

@oscarbenjamin (Contributor) commented Jan 12, 2019

Running pytest master I see warnings from XFAIL tests about asserting None. The warning suggests changing the assert to assert obj is None, but the reason we are asserting None is that the test fails (hence XFAIL).

I don't think these warnings should be emitted from XFAIL tests.

Using pytest master:

import pytest

@pytest.mark.XFAIL
def test_f():
    assert g()

def g():
    # Shouldn't return None but doesn't work properly...
    return None

Running the tests gives:

$ pytest test_f.py 
====================================================== test session starts =======================================================
platform darwin -- Python 2.7.10, pytest-4.1.1, py-1.7.0, pluggy-0.8.0
rootdir: /Users/enojb/current/sympy/pytest, inifile: tox.ini
plugins: xdist-1.26.0, forked-0.2, doctestplus-0.3.0.dev0
collected 1 item                                                                                                                 

test_f.py F                                                                                                                [100%]

============================================================ FAILURES ============================================================
_____________________________________________________________ test_f _____________________________________________________________

    @pytest.mark.XFAIL
    def test_f():
>       assert g()
E       PytestWarning: asserting the value None, please use "assert is None"

test_f.py:5: PytestWarning
==================================================== short test summary info =====================================================
FAIL test_f.py::test_f
==================================================== 1 failed in 0.08 seconds ====================================================
@blueyed (Contributor) commented Jan 12, 2019

What would you assert if it worked? Also just assert g()?
Have you considered using assert g() is not None then?

@RonnyPfannschmidt (Member) commented Jan 12, 2019

Xfail should be lowercase; we need strict marks as default.

@oscarbenjamin (Contributor, Author) commented Jan 12, 2019

I'm seeing a few warnings; I haven't worked through them all yet, but here's one: https://github.com/sympy/sympy/blob/sympy-1.3/sympy/assumptions/tests/test_matrices.py#L41

It's not clear to me that that test should be asserting is not None. It could possibly be changed to is True, but I'm not sure.

@asottile (Member) commented Jan 13, 2019

I believe the mistake here is pytest.mark.XFAIL vs pytest.mark.xfail

Trying it in a little script, this works as expected, even with assert None:

pytest.mark.xfail

$ cat t.py
import pytest

@pytest.mark.xfail
def test():
    assert None
$ pytest t.py 
============================= test session starts ==============================
platform linux -- Python 3.6.7, pytest-4.1.1, py-1.7.0, pluggy-0.8.1
rootdir: /tmp/t, inifile:
collected 1 item                                                               

t.py x                                                                   [100%]

=============================== warnings summary ===============================
t.py::test
  /tmp/t/t.py:5: PytestWarning: asserting the value None, please use "assert is None"
    assert None

-- Docs: https://docs.pytest.org/en/latest/warnings.html
==================== 1 xfailed, 1 warnings in 0.07 seconds =====================
$ pytest -Werror t.py 
============================= test session starts ==============================
platform linux -- Python 3.6.7, pytest-4.1.1, py-1.7.0, pluggy-0.8.1
rootdir: /tmp/t, inifile:
collected 1 item                                                               

t.py x                                                                   [100%]

========================== 1 xfailed in 0.03 seconds ===========================

pytest.mark.XFAIL

switching to the incorrect name:

$ cat t.py
import pytest

@pytest.mark.XFAIL
def test():
    assert None
$ pytest t.py
============================= test session starts ==============================
platform linux -- Python 3.6.7, pytest-4.1.1, py-1.7.0, pluggy-0.8.1
rootdir: /tmp/t, inifile:
collected 1 item                                                               

t.py F                                                                   [100%]

=================================== FAILURES ===================================
_____________________________________ test _____________________________________

    @pytest.mark.XFAIL
    def test():
>       assert None
E       assert None

t.py:5: AssertionError
=============================== warnings summary ===============================
t.py::test
  /tmp/t/t.py:5: PytestWarning: asserting the value None, please use "assert is None"
    assert None

-- Docs: https://docs.pytest.org/en/latest/warnings.html
===================== 1 failed, 1 warnings in 0.05 seconds =====================
$ pytest -Werror t.py
============================= test session starts ==============================
platform linux -- Python 3.6.7, pytest-4.1.1, py-1.7.0, pluggy-0.8.1
rootdir: /tmp/t, inifile:
collected 1 item                                                               

t.py F                                                                   [100%]

=================================== FAILURES ===================================
_____________________________________ test _____________________________________

    @pytest.mark.XFAIL
    def test():
>       assert None
E       _pytest.warning_types.PytestWarning: asserting the value None, please use "assert is None"

t.py:5: PytestWarning
=========================== 1 failed in 0.04 seconds ===========================
@asottile (Member) commented Jan 13, 2019

Unrelated: I'd suggest using pytest instead of the deprecated py.test here.

@Zac-HD added the topic: marks label Jan 14, 2019
@oscarbenjamin (Contributor, Author) commented Jan 20, 2019

Maybe I'm misunderstanding something here. For my example test_f.py at the top of this issue, I changed XFAIL to xfail and the problem does indeed go away, but only if I'm running in the pytest root directory:

$ . venv/bin/activate
$ pip install -e pytest/
Obtaining file:///Users/enojb/current/sympy/pytest
  Installing build dependencies ... done
Requirement already satisfied: py>=1.5.0 in ./venv/lib/python3.7/site-packages (from pytest==4.1.2.dev19+g57bf9d67) (1.7.0)
Requirement already satisfied: six>=1.10.0 in ./venv/lib/python3.7/site-packages (from pytest==4.1.2.dev19+g57bf9d67) (1.11.0)
Requirement already satisfied: setuptools in ./venv/lib/python3.7/site-packages (from pytest==4.1.2.dev19+g57bf9d67) (39.0.1)
Requirement already satisfied: attrs>=17.4.0 in ./venv/lib/python3.7/site-packages (from pytest==4.1.2.dev19+g57bf9d67) (18.2.0)
Requirement already satisfied: more-itertools>=4.0.0 in ./venv/lib/python3.7/site-packages (from pytest==4.1.2.dev19+g57bf9d67) (4.3.0)
Requirement already satisfied: atomicwrites>=1.0 in ./venv/lib/python3.7/site-packages (from pytest==4.1.2.dev19+g57bf9d67) (1.2.1)
Requirement already satisfied: pluggy>=0.7 in ./venv/lib/python3.7/site-packages (from pytest==4.1.2.dev19+g57bf9d67) (0.8.0)
Installing collected packages: pytest
  Running setup.py develop for pytest
Successfully installed pytest
$ cd pytest
$ cat test_f.py 
import pytest

@pytest.mark.xfail
def test_f():
    assert g()

def g():
    # Shouldn't return None but doesn't work properly...
    return None
(venv) $ pytest test_f.py 
====================================================== test session starts =======================================================
platform darwin -- Python 3.7.1, pytest-4.1.2.dev19+g57bf9d67, py-1.7.0, pluggy-0.8.0
rootdir: /Users/enojb/current/sympy/pytest, inifile: tox.ini
plugins: xdist-1.26.0, forked-0.2, doctestplus-0.3.0.dev0
collected 1 item                                                                                                                 

test_f.py x                                                                                                                [100%]
==================================================== short test summary info =====================================================
XFAIL test_f.py::test_f

=================================================== 1 xfailed in 0.09 seconds ====================================================
$ cp test_f.py ..
$ cd ..
$ pytest test_f.py 
====================================================== test session starts =======================================================
platform darwin -- Python 3.7.1, pytest-4.1.2.dev19+g57bf9d67, py-1.7.0, pluggy-0.8.0
rootdir: /Users/enojb/current/sympy, inifile:
plugins: xdist-1.26.0, forked-0.2, doctestplus-0.3.0.dev0
collected 1 item                                                                                                                 

test_f.py x                                                                                                                [100%]

======================================================== warnings summary ========================================================
test_f.py::test_f
  /Users/enojb/current/sympy/test_f.py:5: PytestWarning: asserting the value None, please use "assert is None"
    assert g()

-- Docs: https://docs.pytest.org/en/latest/warnings.html
============================================= 1 xfailed, 1 warnings in 0.14 seconds ==============================================
$ cat test_f.py 
import pytest

@pytest.mark.xfail
def test_f():
    assert g()

def g():
    # Shouldn't return None but doesn't work properly...
    return None
@asottile (Member) commented Jan 20, 2019

That output looks correct to me; are you expecting xfail to silence the warning?

The reason it "appears" to silence the warning when run from the pytest root directory is that we configure warnings-as-errors in pytest's own ini configuration; in reality the warning is upgraded to an exception (which is gobbled by xfail).

@oscarbenjamin (Contributor, Author) commented Jan 20, 2019

Okay, I've reread everything above and I understand now.

The actual code I'm testing uses py.test.mark.xfail anyway. I can change that to pytest.mark.xfail, but either way I see this problem.

After initially reading the comments above, I was hoping that changing to xfail would remove these warnings, because I don't think it makes sense to warn about this in a test that is already marked as xfail. I'll probably just fix this in SymPy by changing the tests, though.

@asottile (Member) commented Jan 20, 2019

Maybe the feature request is "xfail silences warnings". I think that could be considered, but there are probably a lot of cases where you want the warnings even when you expect the test to fail? I haven't put much thought into it.

@oscarbenjamin (Contributor, Author) commented Jan 20, 2019

I think "xfail silences warnings" could be a feature request, but that's not what I would propose.

I've read #3191 and #4146 and I don't understand the rationale for this change. A user who is confused and writes assert mock.assert... will immediately see a test failure because they are asserting None. I can see that it might be worthwhile to improve the test report for assert None when a test fails.

Adding this as a warning means that it will also show up for xfail tests (which is pointless).

@davidlowryduda commented Feb 14, 2019

I believe that this is resolved in sympy #15817, so I suggest that this issue be closed.

@oscarbenjamin (Contributor, Author) commented Feb 14, 2019

I would say that the issue is worked around in sympy. I still think this should be fixed in pytest.

@davidlowryduda commented Feb 15, 2019

Yes, now that you mention it, I agree. Thanks for your quick response.

@nicoddemus (Member) commented Feb 15, 2019

Just realized a simple workaround for this is to use:

@pytest.mark.xfail(run=False)

The test won't run at all, so warnings won't be generated. This also means that the test will be faster; at work we always use run=False for that reason: after all, if it fails nobody will notice anyway, it will just take longer to run. Of course, for the case where you have a flaky test, or when using strict=True, it doesn't make sense to use run=False.
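
A minimal sketch of the workaround applied to the test_f example from the top of this issue:

import pytest

def g():
    # shouldn't return None but doesn't work properly...
    return None

# run=False: pytest reports the test as xfailed without ever executing
# the body, so the assertion-rewrite warning is never emitted
@pytest.mark.xfail(run=False, reason="g() wrongly returns None")
def test_f():
    assert g()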

Having said that, I don't think it is simple to implement this: the warning is emitted by the assertion rewriter, and at the time the warning is emitted it is not straightforward to see whether the test being executed is marked as xfail (possible, just not simple). Another possibility would be to catch those warnings at the report stage, but that would also introduce more complexity than I would like.

Given that there's a trivial workaround (run=False), that an actual implementation seems to be quite complicated, and that the use case is a bit uncommon (I think), I propose to close this as "won't fix".

What do you guys think?

@oscarbenjamin (Contributor, Author) commented Feb 15, 2019

I would suggest to revert #4146. I don't see that it has any real purpose.

@nicoddemus (Member) commented Feb 15, 2019

> I don't see that it has any real purpose.

Well, the reasons are described in #3191.

@oscarbenjamin (Contributor, Author) commented Feb 15, 2019

> > I don't see that it has any real purpose.
>
> Well, the reasons are described in #3191.

I don't see that those reasons justify the end result.

The stated problem is from Stack Overflow. Note that the user with the original problem was already looking at an error message, so emitting a warning (via Python's warning system) is not appropriate: the error message itself could be improved.

The purpose of warnings is to give useful hints in situations where you are not reporting an error. The only way that happens with #4146 is in an XFAIL test, and in that case the warning is useless: I already know that the test fails.

@blueyed (Contributor) commented Apr 11, 2019

@oscarbenjamin

> the error message itself could be improved.

Do you have a suggestion?
Should we use a more verbose error message with "assert None" instead, e.g.

>       assert os.remove.assert_has_calls(calls, any_order=True)
E       WARNING: asserting the value None, please use "assert is None"
E       AssertionError: assert None
E        +  where None = <bound method wrap_assert_has_calls of <MagicMock name='remove' id='65891856'>>([call('file_0.txt'), call('file_1.txt'), call('file_2.txt'), call('file_3.txt'), call('file_4.txt'), call('file_5.txt'), ...], any_order=True)
E        +    …
@nicoddemus (Member) commented Apr 11, 2019

> > Well, the reasons are described in #3191.
>
> I don't see that those reasons justify the end result.
>
> The stated problem is from Stack Overflow. Note that the user with the original problem was already looking at an error message, so emitting a warning (via Python's warning system) is not appropriate: the error message itself could be improved.
>
> The purpose of warnings is to give useful hints in situations where you are not reporting an error. The only way that happens with #4146 is in an XFAIL test, and in that case the warning is useless: I already know that the test fails.

Well #3191 was a feature request from @RonnyPfannschmidt, let's see if he has something else to add.

@blueyed (Contributor) commented May 3, 2019

It also warns about a failing assert re.match(…), where you could append is not None at the end, but that should not really be necessary.
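
A hypothetical minimal case of this pattern (pattern and strings invented for illustration):

import re

# when the pattern does not match, re.match returns None: the assert
# fails as intended, but pytest additionally emits the "asserting the
# value None" warning, even though the idiom itself is reasonable
def test_greeting():
    assert re.match(r"hello", "goodbye world")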

@tucked commented Oct 22, 2019

FWIW, we are hitting this with a custom mark:

# conftest.py

import pytest

def pytest_addoption(parser):
    parser.addoption(
        '--test-version',
        action='store',
        default='1.2.3',
        help='version to test',
    )

def pytest_configure(config):
    config.addinivalue_line("markers", "cxfail: conditionally xfail")

def pytest_collection_modifyitems(config, items):
    for test in items:
        for mark in test.own_markers + test.parent.own_markers:
            if mark.name == 'cxfail':
                version = config.getoption('--test-version')
                versions = mark.kwargs.get('versions', [])
                if version in versions:
                    test.add_marker(
                        pytest.mark.xfail(
                            raises=mark.kwargs.get('raises'),
                            reason="version {0} not in {1}".format(
                                version,
                                versions,
                            ),
                        ),
                    )
# test_sscce.py

import pytest

def truthy():
    return None  # uh-oh

@pytest.mark.cxfail(versions=['1.2.3'], raises=AssertionError, reason='good')
def test_example():
    assert truthy()
$ pytest test_sscce.py
======================= test session starts =======================
platform linux -- Python 3.6.8, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: /tmp/tmp.fqcNkzQQSI
collected 1 item

test_sscce.py x                                             [100%]

======================== warnings summary =========================
test_sscce.py::test_example
  /tmp/tmp.fqcNkzQQSI/test_sscce.py:10: PytestAssertRewriteWarning: asserting the value None, please use "assert is None"
    assert truthy()

-- Docs: https://docs.pytest.org/en/latest/warnings.html
================= 1 xfailed, 1 warnings in 0.02s ==================

> run=False for that reason, after all if it fails nobody will notice anyway, it will just take longer to run.

@nicoddemus What if the reason for the xfail is fixed and the test starts passing? You wouldn't notice an xpass, so you wouldn't remove the xfail. Then, if the test starts failing again, your tests will have missed a regression!

I would sooner ignore/annotate this warning than not run the xfail...
IMO, the point of testing at all is to actually check your expectations.
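
A sketch of the strict=True alternative, reusing the truthy() helper from the example above: an unexpected pass (XPASS) then fails the run, so a stale xfail mark gets noticed instead of silently missed.

import pytest

def truthy():
    return None  # uh-oh (same helper as in test_sscce.py above)

# strict=True turns an unexpected pass into a hard failure, so a fixed
# bug forces you to remove the stale xfail mark rather than miss it
@pytest.mark.xfail(strict=True, reason="truthy() currently returns None")
def test_example():
    assert truthy()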

@craigds commented Nov 19, 2019

We just started getting this problem, and not just with xfail tests, but with failed tests too. The problem as I see it is that the warning's suggested fix completely inverts the assertion condition:

assert re.match(...)

gives

PytestAssertRewriteWarning: asserting the value None, please use "assert is None"

Clearly the correct fix would be is not None, rather than is None.

This is very dangerous because if a developer were to absent-mindedly follow this advice, they would cause the test to pass and therefore cement broken behaviour.

(pytest 4.6.6)

@asottile (Member) commented Nov 19, 2019

@nicoddemus @RonnyPfannschmidt @blueyed I'm convinced we should remove this warning: I've yet to see a helpful case, and I've seen a few pretty unhelpful ones. Let me know what you think.

@The-Compiler (Member) commented Nov 19, 2019

I agree it should be removed, because we can't distinguish a valid None value from "accidentally calling assert with a function which doesn't return at all".

Recently on Twitter I saw a similar case which is tricky but not caught by this warning:

def test_values():
    a = [1, 2, 3]
    b = [3, 2, 1, 4]
    assert a.sort() == b.sort()

The test silently passes (both sort() calls sort in place and return None, so the comparison is None == None), but it's not immediately visible why. The real solution, IMHO, is to start running mypy with --check-untyped-defs over the test suite, because mypy can know whether a function returns a value or not. In this case, it shows:

test_sort.py:4: error: "sort" of "list" does not return a value

Unfortunately, it doesn't catch something like the original example mentioned by @RonnyPfannschmidt in #3191:

import unittest.mock

def test_mock():
    m = unittest.mock.Mock()
    m.meth(123)
    assert m.meth.assert_called_with(123)

But still, I think mypy is a much more suitable place for such a check than pytest is, because it does have the distinction between something like -> None and -> Optional[re.Match].

@blueyed (Contributor) commented Nov 20, 2019

@craigds

> assert re.match(...)
>
> gives
>
> PytestAssertRewriteWarning: asserting the value None, please use "assert is None"
>
> Clearly the correct fix would be is not None, rather than is None.

See #4639 (comment); it depends on whether you expect a match or not, really. In your case it would not give a warning if the test passed (i.e. there was a match), would it?
So the message should/could be clarified in that case.

As for using mypy / type-checking: while it is good in general, often things are not typed yet (e.g. fixtures, if not specified explicitly).

Maybe we should a) have a setting to opt into this, and b) enable it only for known things, like the initial issue (#3191), e.g. based on specific classes/methods.
(It might also be possible to inspect the called/used function's code to see if there's any return statement in there?)
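
A rough sketch of that inspection idea, assuming the asserted function's source is available (illustrative only; dynamically created functions may not have retrievable source):

import ast
import inspect
import textwrap

def returns_a_value(func):
    # walk the function's AST and report whether any `return <expr>`
    # statement could produce a non-None value (a bare `return` has
    # node.value set to None and is correctly excluded)
    source = textwrap.dedent(inspect.getsource(func))
    tree = ast.parse(source)
    return any(
        isinstance(node, ast.Return) and node.value is not None
        for node in ast.walk(tree)
    )

def no_value():
    x = 1 + 1  # no return statement at all

def has_value():
    return 42

assert not returns_a_value(no_value)
assert returns_a_value(has_value)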

@asottile (Member) commented Nov 20, 2019

I don't see how the warning could be useful; it will only ever appear in an already failing test.

@asottile (Member) commented Nov 20, 2019

especially because it's impossible to discern between a correct case (xfail / incremental step in TDD / etc.) and an incorrect case

@blueyed (Contributor) commented Nov 20, 2019

It makes a bit of sense with

> assert m.meth.assert_called_with(123)

So, assuming that we could tell "the used function has no return statement", a warning / hint makes sense to me.
If we cannot tell that from inspecting the code, we could e.g. whitelist unittest's Mock methods (but I am repeating myself).

> I don't see how the warning could be useful, it will only ever appear in an already failing test

True, but it's about helping / hinting then.
I've suggested in #4639 (comment) having an additional warning alongside the normal failure.

@asottile (Member) commented Nov 20, 2019

I really think we should only produce warnings if we're 100% sure it's a bug in the user's code that would prevent a test from adequately testing their code. The assert (tup, le) one is a great example: that's always going to be true and never what the programmer intended. Another good example is the with pytest.raises(BaseException): pattern, which will hide programming errors (we don't yet have a warning class for this, but it's theoretical).
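
For illustration, the always-true tuple case looks like this (example invented, not from this thread):

# a stray comma turns an intended assert-with-message into an assertion
# on a two-element tuple; a non-empty tuple is always truthy, so this
# check can never fail, which the assert-on-tuple warning catches
assert (1 == 2, "expected 1 to equal 2")  # always passes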

In the case of assert m.assert_called_once_with(...), I can't see how adding a warning on top of the failure is any more helpful than the failure alone. Seeing the assert None, None = m.assert_called_once_with(...) expansion is already informative about where that None is coming from. The case that you really want to protect against is assert m.called_once_with(...), which is another assertion that always succeeds (but detecting / alerting on such things isn't really in the spirit of pytest; that's best done by a linter or type checker).

@nicoddemus (Member) commented Nov 20, 2019

After reading all the points, I agree. The warning is well-intentioned, but in the end it is proving to be more of a nuisance than anything else.
