Missing output in pytest magics #2

Closed
abingham opened this Issue Jan 3, 2017 · 9 comments

abingham commented Jan 3, 2017

When I run a cell using the new magics, I don't see the complete output that I expect from pytest. On the command line pytest typically prints detailed information about an assertion, but all the magic prints is E AssertionError.

For example, given this function:

def test_sorted():
    assert sorted([4, 2, 1, 6]) == [1, 2, 3, 4]

the command line shows this:

sixtynorth@TK421% pytest -qq pytests.py 
F
=============================================== FAILURES ===============================================
_____________________________________________ test_sorted ______________________________________________

    def test_sorted():
>       assert sorted([4, 2, 1, 6]) == [1, 2, 3, 4]
E       assert [1, 2, 4, 6] == [1, 2, 3, 4]
E         At index 2 diff: 4 != 3
E         Full diff:
E         - [1, 2, 4, 6]
E         ?         ---
E         + [1, 2, 3, 4]
E         ?        +++

pytests.py:2: AssertionError

but the cell magic just shows:

F
=============================================== FAILURES ===============================================
_____________________________________________ test_sorted ______________________________________________

    def test_sorted():
>       assert sorted([4, 2, 1, 6]) == [1, 2, 3, 4]
E       AssertionError

<ipython-input-3-f12e7a50e0c6>:3: AssertionError

This makes it hard to demo one of pytest's strengths: its ability to introspect assertions and pinpoint the problem.

I looked through the code a bit, but nothing struck me as an obvious culprit. Is there something I can do to make this work with flags to run_pytest? Or is there a code change you can make to fix this?

chmp commented Jan 3, 2017

Could you detail how you call run_pytest? I cannot reproduce your problem. For me it works as expected:

%%run_pytest
def test_sorted():
    assert sorted([4, 2, 1, 6]) == [1, 2, 3, 4]

output:

============================================================================== test session starts ==============================================================================
platform darwin -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: /Volumes/Home/Temp, inifile: 
plugins: catchlog-1.2.2
collected 1 items

Test.py F

=================================================================================== FAILURES ====================================================================================
__________________________________________________________________________________ test_sorted __________________________________________________________________________________

    def test_sorted():
>       assert sorted([4, 2, 1, 6]) == [1, 2, 3, 4]
E       assert [1, 2, 4, 6] == [1, 2, 3, 4]
E         At index 2 diff: 4 != 3
E         Use -v to get the full diff

<ipython-input-13-f12e7a50e0c6>:3: AssertionError
=========================================================================== 1 failed in 0.03 seconds ============================================================================

With -q, the output also contains the detailed assertion error but omits the version header. If I execute from ipytest import run_pytest; run_pytest() directly inside the notebook, it prints the exact same output as %%run_pytest.

The full notebook can be found here (I added .txt to trick github into uploading the notebook).

chmp commented Jan 3, 2017

Maybe the flag --assert=reinterp could help (found here)?

abingham commented Jan 4, 2017

I'm totally baffled here. Whenever I use ipytest from a jupyter notebook, I don't see the expected diffs in the output. I've tried this with my own notebooks as well as with the examples provided with ipytest. I've also tried it on OS X and Windows.

When I run pytest from the command line I see the proper diffs, so it's not some problem with pytest.

Could you detail how you call run_pytest?

This problem appears even with the examples/Magics.ipynb shipped with ipytest. To try to isolate the problem, I cloned a fresh copy of ipytest, installed everything into a new virtualenv, and ran the jupyter notebook. Still, this is what I see:
CELL 1

# set the file name (required)
__file__ = 'Magics.ipynb'

# add ipython magics
import ipytest.magics

import pytest

CELL 2

%%run_pytest[clean]

def test_sorted():
    assert sorted([4, 2, 1, 6]) == [1, 2, 3, 4]
========================================= test session starts ==========================================
platform darwin -- Python 3.5.2, pytest-3.0.5, py-1.4.32, pluggy-0.4.0
rootdir: /Users/sixtynorth/projects/ipytest, inifile: 
collected 1 items

Magics.py F

=============================================== FAILURES ===============================================
_____________________________________________ test_sorted ______________________________________________

    def test_sorted():
>       assert sorted([4, 2, 1, 6]) == [1, 2, 3, 4]
E       AssertionError

<ipython-input-5-f12e7a50e0c6>:3: AssertionError
======================================= 1 failed in 0.02 seconds =======================================

As I mentioned, I've tried this across a few machines which don't share any state, so I don't think it's some sort of configuration options I've somehow set somewhere.

Is it possible that you've got some setting somewhere that's letting your notebooks show more output? I'm grasping at straws here, but this is pretty mysterious.

Also, have you tried a fresh, isolated installation somewhere to see if you can reproduce this behavior?

chmp commented Jan 4, 2017

Yes, I can reproduce your behavior. Apparently the pytest version is to blame. For pytest==2.9.2 everything works as expected, but for pytest==3.0.5 the output is broken.

I'll have to look into whether this can be fixed and, if so, how.

abingham commented Jan 4, 2017

OK, it's good that you've at least got a lead on it. Let me know if you need a hand looking into anything.

DenisGorbachev commented Apr 23, 2018

Same issue with the following package versions:

$ conda list | grep pytest
ipytest                   0.2.1                     <pip>
pytest                    3.5.0                    py36_0    conda-forge

Is it possible to provide any additional info to help debug the issue?

chmp commented Apr 23, 2018

Thanks for reminding me of this issue. I pushed a new release as 0.2.2. It includes a new magic %%rewrite_asserts that will add support for pytest's pretty asserts. When using the %%run_pytest magic, the asserts are rewritten automatically.

Does this change fit your needs?

For completeness, you can use the magics as in:

## Option 1

# cell 1
import ipytest
import ipytest.magics
__file__ = 'Untitled.ipynb'

# cell 2, rewrite the asserts
%%rewrite_asserts

def test_foo():
    assert [1, 2] == [2, 3]

# cell 3, separate call to `run_pytest`
ipytest.run_pytest()

## Option 2

# cell 1
import ipytest.magics
__file__ = 'Untitled.ipynb'

# cell 2, run_pytest magic
%%run_pytest

def test_bar():
    assert [2, 3] == [3, 4]

DenisGorbachev commented Apr 23, 2018

@chmp Yes, 0.2.2 works perfectly. Thanks a lot for the fast response!

chmp commented Apr 23, 2018

Perfect. Happy to help :).

chmp closed this Apr 23, 2018
