Pytest's caplog fixture with multiprocessing (#573) #130

Closed
vikramsubramanian opened this issue Aug 28, 2024 · 10 comments
@vikramsubramanian

I am trying to use this with multiprocessing, but am not having any luck. Is there something I should be looking out for there?

_Originally posted in #573_

@vikramsubramanian

Hey, I created a new ticket to address your issue separately.

Can you please give me an example using multiprocessing which doesn't work as expected? What is happening?

@vikramsubramanian

sure, thanks for making this issue, I should have done that myself 😅

## Minimum reproduction

```python
import multiprocessing as mp

import pytest
from _pytest.logging import caplog as _caplog
from loguru import logger


@pytest.fixture
def caplog(_caplog):
    # Redirect Loguru logs to pytest's caplog handler
    handler_id = logger.add(_caplog.handler, format="{message}")
    yield _caplog
    logger.remove(handler_id)


def create_a_log():
    logger.critical("multiprocess log")


@pytest.fixture
def log():
    logger.critical("setup log")

    def _log(name: str = None):
        client_process = mp.Process(
            name=name,
            daemon=True,
            target=create_a_log,
        )
        client_process.start()
        return client_process

    yield _log
    logger.critical("teardown log")


def test_log(log):
    logger.critical("test log")
    log_process = log()
    log_process.join()

    assert False, "fail on purpose to show captured logs, stdout, and stderr"
```

## What I expect

Logs generated using Loguru to be captured by pytest.

## What happens

None of the logs are captured:

```
collected 1 item

repro/test_issue.py F                                                                             [100%]

=============================================== FAILURES ================================================
_______________________________________________ test_log ________________________________________________

log = <function log.<locals>._log at 0x7f676c9035e0>

    def test_log(log):
        logger.critical("test log")
        log_process = log()
        log_process.join()

>       assert False, "fail on purpose to show captured logs, stdout, and stderr"
E       AssertionError: fail on purpose to show captured logs, stdout, and stderr
E       assert False

repro/test_issue.py:42: AssertionError
----------------------------------------- Captured stderr setup -----------------------------------------
2022-01-21 17:12:03.648 | CRITICAL | repro.test_issue:log:22 - setup log
----------------------------------------- Captured stderr call ------------------------------------------
2022-01-21 17:12:03.648 | CRITICAL | repro.test_issue:test_log:38 - test log
2022-01-21 17:12:03.650 | CRITICAL | repro.test_issue:create_a_log:17 - multiprocess log
--------------------------------------- Captured stderr teardown ----------------------------------------
2022-01-21 17:12:03.663 | CRITICAL | repro.test_issue:log:34 - teardown log
======================================== short test summary info ========================================
FAILED repro/test_issue.py::test_log - AssertionError: fail on purpose to show captured logs, stdout, ...
=========================================== 1 failed in 0.05s ===========================================
```

@vikramsubramanian

Hi, thanks for the reproducible example! 👍

I notice you're not using the `caplog` fixture anywhere, which is why no logs are captured.

I refactored your example a bit as follows:

```python
import multiprocessing as mp

import pytest
from _pytest.logging import caplog as _caplog
from loguru import logger


@pytest.fixture
def caplog(_caplog):
    # Redirect Loguru logs to pytest's caplog handler
    handler_id = logger.add(_caplog.handler, format="{message}")
    yield _caplog
    logger.remove(handler_id)


def create_a_log():
    logger.critical("multiprocess log")


def log():
    logger.critical("setup log")

    client_process = mp.Process(
        name=None,
        daemon=True,
        target=create_a_log,
    )
    client_process.start()
    client_process.join()

    logger.critical("teardown log")


def test_log(caplog):
    logger.critical("test log")

    log()

    assert "setup log" in caplog.text
    assert "teardown log" in caplog.text
    assert "multiprocess log" in caplog.text
```

Notice that all logs are captured except the one emitted in a different Process. This is actually expected: when using multiprocessing to spawn a child process, the logger is entirely copied. The child and parent processes do not share the same logger object, so logs emitted in the child process are not captured by the parent one.
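The same effect is easy to reproduce with the standard library alone. This sketch (my own illustration, not from the thread, assuming a Linux "fork" start method) registers an in-memory handler in the parent; the child inherits a copy of everything, so its record only ever reaches the child's copy of the list:

```python
import logging
import multiprocessing as mp

records = []  # lives in the parent's memory


class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())


def child_log():
    # After fork, this appends to the CHILD's copy of `records`,
    # which the parent never sees.
    logging.getLogger("demo").critical("multiprocess log")


log = logging.getLogger("demo")
log.addHandler(ListHandler())
log.critical("parent log")

ctx = mp.get_context("fork")
p = ctx.Process(target=child_log, daemon=True)
p.start()
p.join()

print(records)  # only the parent's record survives here
```

The same copy-on-fork behavior is what makes the Loguru handler added by the `caplog` fixture invisible to child processes.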

As a workaround, you have to specify enqueue=True when adding the caplog handler to Loguru. It ensures logs are propagated to the parent process instead of being handled in the child one.

```python
@pytest.fixture
def caplog(_caplog):
    # enqueue=True propagates child-process logs back to the parent
    handler_id = logger.add(_caplog.handler, format="{message}", enqueue=True)
    yield _caplog
    logger.remove(handler_id)
```

See also this related part of the documentation: "Compatibility with multiprocessing using enqueue argument".
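For intuition, Loguru's enqueue=True behaves much like the standard library's QueueHandler/QueueListener pair (this stdlib sketch is my own analogy, again assuming a "fork" start method): the child only serializes records onto a shared queue, and a listener in the parent does the actual emitting:

```python
import logging
import logging.handlers
import multiprocessing as mp

captured = []  # filled by the listener running in the parent


class ListHandler(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())


queue = mp.Queue()


def child_log():
    # The child never emits directly: it only enqueues the record.
    log = logging.getLogger("demo2")
    log.addHandler(logging.handlers.QueueHandler(queue))
    log.critical("multiprocess log")


# The listener thread in the parent drains the queue and emits records
# with the parent's handler, so the parent "sees" child logs.
listener = logging.handlers.QueueListener(queue, ListHandler())
listener.start()

ctx = mp.get_context("fork")
p = ctx.Process(target=child_log, daemon=True)
p.start()
p.join()

listener.stop()  # flushes any remaining records before returning
print(captured)
```

With enqueue=True, Loguru wires up an equivalent queue internally, which is why the child's logs end up in the parent's caplog handler.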

@vikramsubramanian

Okay, I think I see now where a lot of my confusion is coming from. This is ONLY for adding Loguru logs to the caplog fixture; it does not add them to whatever pytest uses to help debug failed tests. Example below:

```python
import logging
import multiprocessing as mp

import pytest
from _pytest.logging import caplog as _caplog
from loguru import logger


@pytest.fixture
def caplog(_caplog):
    # Redirect Loguru logs to pytest's caplog handler
    handler_id = logger.add(_caplog.handler, format="{message}", enqueue=True)
    yield _caplog
    logger.remove(handler_id)


def create_a_log():
    logger.critical("loguru multiprocess log")
    logging.critical("logging multiprocess log")


def log():
    logger.critical("loguru setup log")
    logging.critical("logging setup log")

    client_process = mp.Process(
        name=None,
        daemon=True,
        target=create_a_log,
    )
    client_process.start()
    client_process.join()

    logger.critical("loguru teardown log")
    logging.critical("logging teardown log")


def test_log(caplog):
    logger.critical("loguru test log")
    logging.critical("logging test log")

    log()

    assert "setup log" in caplog.text
    assert "teardown log" in caplog.text
    assert "multiprocess log" in caplog.text
    assert False
```

[screenshot of the pytest failure output]

I see that your suggestion results in all logs, including multiprocess logs, ending up in caplog, which is even better than what happens with standard logging (note that the logging multiprocess log doesn't show up anywhere)!

Do you happen to know how to get the loguru logs to show up under the Captured log call section?

@vikramsubramanian

Indeed, the caplog fixture and the logs reported in the terminal are different. I dug a bit into the pytest source code and found this: _pytest/logging.py#L552

This is the handler reporting logs on terminal, which needs to be added to Loguru's logger as well. Here is a possible implementation:

```python
@pytest.fixture
def reportlog(pytestconfig):
    # Send Loguru logs to the handler pytest uses for terminal log reporting
    logging_plugin = pytestconfig.pluginmanager.getplugin("logging-plugin")
    handler_id = logger.add(logging_plugin.report_handler, format="{message}", enqueue=True)
    yield
    logger.remove(handler_id)
```

I don't know if this can be considered part of the public pytest API, but it seems to work fine. Can you try adding this fixture to your test_log() function and let me know if it gives the output you were looking for?

@vikramsubramanian

That works! Thanks so much!

@vikramsubramanian

For posterity: all logs can be automatically reported by using autouse=True on the fixture.
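Putting the pieces together, a conftest.py might look like this (a sketch assembled from the fixtures discussed above; note that reportlog relies on the logging-plugin internals mentioned earlier, which may not be public pytest API):

```python
# conftest.py (sketch): autouse=True applies both fixtures to every test
import pytest
from _pytest.logging import caplog as _caplog
from loguru import logger


@pytest.fixture(autouse=True)
def caplog(_caplog):
    # Make Loguru logs visible to pytest's caplog fixture
    handler_id = logger.add(_caplog.handler, format="{message}", enqueue=True)
    yield _caplog
    logger.remove(handler_id)


@pytest.fixture(autouse=True)
def reportlog(pytestconfig):
    # Make Loguru logs show up in pytest's "Captured log" sections
    logging_plugin = pytestconfig.pluginmanager.getplugin("logging-plugin")
    handler_id = logger.add(logging_plugin.report_handler, format="{message}", enqueue=True)
    yield
    logger.remove(handler_id)
```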

@vikramsubramanian

Okay, I am running into a different but related issue: the code I'm testing calls logger.remove(), and it's nuking my logging-plugin sink. Thoughts?

@vikramsubramanian

A way to "protect" a sink would be awesome :)

@vikramsubramanian

If you know the handler id returned by .add() in your tested code, you can use .remove(handler_id), which keeps the logging-plugin handler alive.

Otherwise, I would advise using unittest.mock to re-add the logging-plugin handler as a side effect of calling logger.remove().
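As a generic illustration of that mock-based workaround (toy names, not Loguru's actual API), you can patch remove so that every call re-registers the handler you want to protect:

```python
from unittest import mock


class ToyLogger:
    """Stand-in for a logger with removable handlers (hypothetical)."""

    def __init__(self):
        self.handlers = {"caplog": "caplog-handler", "report": "report-handler"}

    def remove(self, handler_id=None):
        # Like Loguru, remove() with no argument removes every handler
        if handler_id is None:
            self.handlers.clear()
        else:
            self.handlers.pop(handler_id, None)


logger = ToyLogger()
original_remove = logger.remove  # bound method captured before patching


def remove_but_protect(handler_id=None):
    original_remove(handler_id)
    # Re-add the "protected" handler as a side effect of any remove()
    logger.handlers.setdefault("report", "report-handler")


with mock.patch.object(logger, "remove", side_effect=remove_but_protect):
    logger.remove()  # the code under test nukes all handlers...

print(sorted(logger.handlers))  # ...but the protected one is re-added
```

The same pattern applies to the real logger: patch logger.remove inside the test so that the reportlog handler is re-added whenever the code under test clears everything.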
