
Enable per test output correlation for pytest #22594

eleanorjboyd opened this issue Dec 6, 2023 · 3 comments
eleanorjboyd commented Dec 6, 2023

A request was made in an earlier issue for this type of feature. At the time I thought it was not possible given our configuration, but with help from pytest contributors it should now work. Based on the pytest arguments the user controls, test output (including stdout, stderr, and logging) will be captured and returned both in the inline error popup and in the Test Results panel for each test. The error popup only applies to failing tests; the panel will display output for both passing and failing tests. All output will now be correlated with each individual test unless the user configures otherwise.

Steps for Implementation:

  1. Inside def pytest_report_teststatus(report, config):
  2. you can access report.capstdout, report.caplog, and report.capstderr (a rough sketch of this hook follows the list).
  3. Capturing works with pytest's default capture setting, which is the opposite of how the extension handled this parameter before the rewrite. The reason for that earlier design decision no longer applies, so we can switch back to the default capture behavior, per the pytest docs.
  4. All captured stdout, log, and stderr output can be attached to the message attribute of the test outcome object, which is converted to JSON and sent to the extension.
  5. Since the message attribute is of type string, it should be formatted on the Python side so the different types of output can be distinguished.
  6. The message attribute is read by the extension and displayed depending on the test outcome.
  7. If the test FAILED, the message is displayed in the inline error popup and in the Test Results panel when you click on each test (see images below).
  8. If the test SUCCEEDS, we currently do not display the message anywhere; this will need to change. We need to check whether the message is empty and, if it is not, present it to the user. My plan is to display this message in the Test Results panel, similar to failed tests.
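
A minimal sketch, assuming a hypothetical collected_outcomes dict and separator format, of what steps 1–5 could look like inside the plugin (this is illustrative, not the extension's actual implementation):

```python
import json

collected_outcomes = {}  # hypothetical: node id -> outcome payload


def pytest_report_teststatus(report, config):
    # Only the "call" phase carries the test body's own captured output;
    # setup and teardown phases are reported separately.
    if report.when != "call":
        return
    sections = {
        "Captured stdout": report.capstdout,
        "Captured stderr": report.capstderr,
        "Captured log": report.caplog,
    }
    # Join the non-empty sections with labelled separators so the extension
    # can tell the different kinds of output apart inside a single string.
    message = "\n".join(
        f"----- {name} -----\n{text}" for name, text in sections.items() if text
    )
    collected_outcomes[report.nodeid] = {
        "test": report.nodeid,
        "outcome": report.outcome,
        "message": message,
    }


def serialize_outcomes():
    # Hypothetical: how the collected payload might be converted to JSON
    # before being sent to the extension.
    return json.dumps(list(collected_outcomes.values()))
```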

User Configurability

An important part of this design is giving the user control over the output. There are two main pytest parameters that provide this control, and our extension must respect both:
-s or --capture=no: when this option is used, pytest does not capture test output and instead prints it to the console (by default, capture is on).
-rA or --show-capture=all: this parameter has several "levels" that let the user choose which tests (passed, failed, skipped, etc.) have their captured output shown.

-s

  • This will automatically be respected given the design described above. If the user enables the -s flag, pytest will not capture any output, so our message attribute will only contain a stack trace in the case of failure plus any logging; the -s flag does not impact logging.

-rA

  • pytest also respects this flag and handles whichever level is selected, so the message will respect the chosen setting. The only additional consideration is that if the message value is empty (which can happen if a test passes and the -rA flag is off), we do not want to display a blank message in the Test Results panel, so a check needs to be in place to confirm whether it is empty (a sketch of such a check follows).
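
A minimal sketch of such a guard, assuming a hypothetical build_message helper on the Python side (the real check could equally live on the extension side):

```python
def build_message(report):
    # Hypothetical helper: join only the non-empty captured sections so a
    # passing test with nothing captured produces no message at all.
    parts = [report.capstdout, report.capstderr, report.caplog]
    text = "\n".join(part for part in parts if part)
    return text or None  # None (-> null in JSON) signals "nothing to display"
```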

Test Case Examples

import logging
import sys


def test_logging2(caplog):
    logger = logging.getLogger(__name__)
    caplog.set_level(logging.DEBUG)  # Set minimum log level to capture

    logger.debug("This is a debug message.")
    logger.info("This is an info message.")
    logger.warning("This is a warning message.")
    logger.error("This is an error message.")
    logger.critical("This is a critical message.")

    assert False  # also try with True


def test_print_output():  # renamed so it does not shadow test_logging2
    print("This is a stdout message.")
    print("This is a stderr message.", file=sys.stderr)

    assert False  # also try with True

These tests should be run with the arguments -s and -rA in different combinations (no flags, -s only, -rA only, and -s -rA together).

Expected Behavior Based on Arguments

When a test fails, the value of message will be:

|         | default | -s |
|---------|---------|----|
| default | includes everything (logs, err, out) | logging in msg, prints to console |
| -rA     | includes everything (no impact on failed tests) | logging in msg, prints to console |

When a test passes, the value of message will be:

|         | default | -s |
|---------|---------|----|
| default | includes nothing | logging in msg |
| -rA     | includes everything (logs, err, out) | logging in msg, prints to console |

Visuals

test failure popup:
[screenshot]

Test Results panel:
[screenshot]

@jeffwright13

Note: if you want to present rerun results (available if you have the popular pytest-rerunfailures plugin installed), you will need to enable them by specifying -rAR instead of -rA.


Bjoernolav commented Dec 7, 2023

This feature is highly appreciated; it would make it much easier to investigate complex test failures when running multiple tests.

@JorgeRuizITCL

This would be pretty helpful, as you can use the test runner to debug WIP implementations.
