
Using capfd in fixture breaks output capturing #4428

Open
Nikratio opened this issue Nov 20, 2018 · 11 comments
Labels
plugin: capture (related to the capture builtin plugin)
topic: fixtures (anything involving fixtures directly or indirectly)

Comments

@Nikratio
Contributor

Consider the following code:

import sys
import subprocess
import pytest

#@pytest.yield_fixture(autouse=True)
#def does_nothing(request, capfd):
#   yield

@pytest.fixture(scope='function')
def emits_stdout():
    sub = subprocess.Popen([
        sys.executable,
        '-c', ';'.join((
            'import time',
            'print("hello world")',
            'time.sleep(0.5)',
            'print("hello whirled")',
        ))
    ])
    yield 'foo'
    sub.wait()

def test_something(emits_stdout):
    import time
    time.sleep(0.7)
    assert False

The output is as expected:

$ python3 -m pytest test_it.py 
==================================== test session starts =====================================
platform linux -- Python 3.5.3, pytest-3.8.0, py-1.6.0, pluggy-0.7.1
rootdir: /home/nikratio/tmp/tests, inifile:
plugins: trio-0.5.0
collected 1 item                                                                             

test_it.py F                                                                           [100%]

========================================== FAILURES ==========================================
_______________________________________ test_something _______________________________________

emits_stdout = 'foo'

    def test_something(emits_stdout):
        import time
        time.sleep(0.7)
>       assert False
E       assert False

test_it.py:27: AssertionError
------------------------------------ Captured stdout call ------------------------------------
hello world
hello whirled
================================== 1 failed in 0.73 seconds ==================================

However, if I uncomment the "noop" fixture, no output is captured anymore:

$ python3 -m pytest test_it.py 
==================================== test session starts =====================================
platform linux -- Python 3.5.3, pytest-3.8.0, py-1.6.0, pluggy-0.7.1
rootdir: /home/nikratio/tmp/tests, inifile:
plugins: trio-0.5.0
collected 1 item                                                                             

test_it.py F                                                                           [100%]

========================================== FAILURES ==========================================
_______________________________________ test_something _______________________________________

emits_stdout = 'foo'

    def test_something(emits_stdout):
        import time
        time.sleep(0.7)
>       assert False
E       assert False

test_it.py:27: AssertionError
================================== 1 failed in 0.75 seconds ==================================


@RonnyPfannschmidt
Member

Using the capfd fixture is not a no-op; I believe an integration point with it is problematic, but I don't currently recall the details.

@Nikratio changed the title from "no-op fixture breaks output capturing" to "Using capfd in fixture breaks output capturing" Nov 20, 2018
@Nikratio
Contributor Author

You are right! Adjusted the bug title.

@RonnyPfannschmidt
Member

@RonnyPfannschmidt
Member

@nicoddemus I believe this is yet another indication that a completely different structure for capture is needed.

@Nikratio
Contributor Author

@RonnyPfannschmidt Could you elaborate on how this is expected behavior? I read the link several times now, but I don't understand how it explains what I am seeing.

@RonnyPfannschmidt
Member

@Nikratio the fixture captures the actual output so that it can be used in the test; it is not passed through to the outer pytest capture.
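The shadowing described here can be reproduced outside pytest with plain file-descriptor redirection. This is a minimal sketch, assuming fd-level capture similar to what capfd does; `capture_fd1` is a made-up helper, not a pytest API:

```python
import os
import tempfile

def capture_fd1(body):
    """Redirect fd 1 into a temp file, run body(), restore fd 1,
    and return the bytes that were written while redirected."""
    buf = tempfile.TemporaryFile()
    saved = os.dup(1)
    os.dup2(buf.fileno(), 1)
    try:
        body()
    finally:
        os.dup2(saved, 1)
        os.close(saved)
    buf.seek(0)
    return buf.read()

# Nest two captures: the inner one (standing in for capfd inside a
# fixture) shadows the outer one (standing in for pytest's global
# capture), so the outer capture never sees the output.
inner_seen = []

def inner_capture():
    inner_seen.append(capture_fd1(lambda: os.write(1, b"hello\n")))

outer_bytes = capture_fd1(inner_capture)
# inner_seen[0] is b"hello\n"; outer_bytes is b""
```

Whichever redirection is installed last wins, which is consistent with the innermost capture fixture swallowing output that the outer pytest capture would otherwise report.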

@Nikratio
Contributor Author

Is there any workaround?

@RonnyPfannschmidt
Member

For example, the capture fixtures have a context-manager method (`disabled()`) to disable them for a portion of code.

@Nikratio
Contributor Author

Hmm, not sure I follow. I do not want to disable them, I want them to work.

@Zac-HD added the "plugin: capture" and "topic: fixtures" labels Nov 24, 2018
@xmo-odoo

the fixture captures the actual output so that it can be used in the test; it is not passed through to the outer pytest capture

So capsys / capfd doesn't rely on (and is not compatible with) the "builtin" capturing of stdout/stderr?

That would explain my problem. I wasn't sure whether it was worth opening a separate issue, and maybe it isn't if this is a known issue: my tests interact with a subprocess which is created in a fixture and inherits the test runner's stdout/stderr (just a subprocess.Popen(args…), no kwargs).

The output of that subprocess is correctly captured by pytest itself (as part of capturing the overall stdout/stderr), but neither capsys nor capfd can see it. That makes sense if they just swap out the fds/streams locally during the test: they come too late for the subprocess to be affected.

Am I correct that capfd-ing the subprocess's fixture would work, but then the subprocess's output wouldn't be displayed by pytest anymore (on test failure)? And even if I tried to manually inject it, e.g.

out, err = capfd.readouterr()
with capfd.disabled():
    print(out)
    print(err, file=sys.stderr)

I would still be missing bits if the test used readouterr internally?
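The "too late for the subprocess" intuition can be checked outside pytest: a child inherits a copy of the parent's fd table at spawn time, so a later fd swap in the parent (which is roughly how capfd hooks in) never affects where the child writes. A minimal sketch, not pytest code:

```python
import os
import subprocess
import sys
import tempfile

# Put an "outer" capture in place first, like pytest's global capture.
outer = tempfile.TemporaryFile()
saved = os.dup(1)
os.dup2(outer.fileno(), 1)

# The child inherits the *current* fd 1 (the outer capture file).
child = subprocess.Popen([sys.executable, "-c", "print('from child')"])

# Now swap fd 1 again, like capfd activating afterwards: the child's
# fd table is its own copy, so this has no effect on it.
late = tempfile.TemporaryFile()
os.dup2(late.fileno(), 1)
child.wait()

# Restore the real stdout and inspect both captures.
os.dup2(saved, 1)
os.close(saved)
outer.seek(0)
late.seek(0)
outer_bytes = outer.read()   # contains the child's output
late_bytes = late.read()     # empty: the late redirect saw nothing
```

So the subprocess's output lands in whichever capture was active when the fixture spawned it, regardless of what capfd does inside the test.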

@xmo-odoo

Also somewhat of a duplicate of #3448?

xmo-odoo added a commit to odoo/runbot that referenced this issue Jun 13, 2024
… valid

Seems like a good idea to keep better track of the log of an Odoo instance
used for testing, and to avoid silently ignoring logged errors.

- intercept odoo's stderr via a pipe; that way we can still write it
  back out, and pytest is able to read & buffer it. pytest's capfd
  would not work correctly: it breaks output capturing (and printing
  on failure), and because of the way it hooks in it is unable to
  capture from subprocesses inheriting the standard streams, cf.
  pytest-dev/pytest#4428
- update the env fixture to check that the odoo log doesn't have any
  exception on failure
- make that check conditional on the `expect_log_errors` marker, this
  way we can mark tests for which we expect errors to be logged, and
  assert that that does happen
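The pipe-interception approach described in the commit can be sketched roughly as follows; `run_and_tee` is an illustrative helper under assumed requirements, not the actual runbot code:

```python
import subprocess
import sys

def run_and_tee(args):
    """Run a subprocess with its stderr attached to a pipe, buffer every
    line, and write each line back to our own stderr so that an fd-level
    capturer in *this* process (e.g. pytest's global capture) still sees
    it.  Returns (exit_code, buffered_stderr)."""
    proc = subprocess.Popen(args, stderr=subprocess.PIPE, text=True)
    captured = []
    for line in proc.stderr:
        captured.append(line)
        sys.stderr.write(line)  # re-emit for the outer capture
    proc.wait()
    return proc.returncode, "".join(captured)

code, log = run_and_tee(
    [sys.executable, "-c", "import sys; sys.stderr.write('boom\\n')"]
)
```

Because the child writes to a pipe owned by the test process rather than inheriting the standard streams, its output can be both inspected by the fixture (e.g. to assert the log has no errors) and forwarded so that normal capture and failure reporting still work.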