[RFC] Explicit mechanism for supporting xfail and skip in [third-party] plugins #7327

Open
webknjaz opened this issue Jun 5, 2020 · 3 comments
Labels
status: help wanted (developers would like help from experts on this topic)
type: docs (documentation improvement, missing or needing clarification)
type: proposal (proposal for a new feature, often to gather opinions or design the API around the new feature)

Comments

webknjaz commented Jun 5, 2020

While working on pytest-dev/pytest-forked#34, I had to dig through the source code of the skipping plugin to figure out how to make pytest show xfailed tests for runs made by an external plugin. Here are my observations:

  1. It's undocumented how to integrate xfail with test reports from plugins.
  2. I learned that if you set outcome='xfailed' and wasxfail='reason', pytest shows the normal xfailed output in the failures and summary sections, but the progress section still shows a failure (an F letter instead of X/x).
  3. I discovered that I have to set outcome='skipped' (along with wasxfail='reason'!) to get consistent output, but only when the test actually has an xfail marker (see the sketch after this list). Wait, what? skipped? Really? But I really ran it...
  4. I need to inspect all the marks and their conditionals, and figure out whether the conditional expression of any mark is true or whether it raises the expected exception.
  5. Internal implementation details leak into the public API... (at least that's how I see the need to set a certain magic combination of values and do manual condition matching)
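
To make observation 3 concrete, here is a minimal sketch of what a third-party plugin currently has to do, assuming it can hook into report creation. The hookwrapper mechanism and the report attributes (outcome, wasxfail) are real pytest API; the surrounding flow is simplified for illustration:

```python
import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    xfail_marker = item.get_closest_marker("xfail")
    if xfail_marker is not None and report.when == "call" and report.failed:
        # Counter-intuitively, the report has to claim the test was
        # "skipped" (not "xfailed") for the progress output to show
        # an 'x' instead of an 'F'.
        report.outcome = "skipped"
        report.wasxfail = xfail_marker.kwargs.get("reason", "")
```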

So here's what I think could be improved:

  1. Docs. It should be explicitly documented how to make your plugin work with xfail.
  2. There could be some sort of public API for marking a test result as xfailed. (I don't really like this, so see (3).)
  3. Implement a better xfail-processing mechanism on the pytest internals side:
    • Plugins should cleanly report failures and passes.
    • pytest should convert the test reports to have the proper xfail-related attributes after receiving results from plugins.
    • There should be a mechanism for a plugin to convey the exception that caused the failure to pytest, so that it can also be matched against the raises argument (sketched below).
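
To illustrate that last point, the matching pytest would need to do internally might look roughly like this. The helper name is invented for this sketch; only item.get_closest_marker() and the xfail marker's raises keyword are real pytest API:

```python
def exception_matches_xfail(item, exc):
    """Check an exception forwarded by a plugin against @pytest.mark.xfail(raises=...)."""
    marker = item.get_closest_marker("xfail")
    if marker is None:
        return False
    raises = marker.kwargs.get("raises")
    # No `raises` restriction given: any failure counts as an expected failure.
    return raises is None or isinstance(exc, raises)
```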
@Zac-HD Zac-HD added status: help wanted developers would like help from experts on this topic type: docs documentation improvement, missing or needing clarification type: proposal proposal for a new feature, often to gather opinions or design the API around the new feature labels Jun 6, 2020
@RonnyPfannschmidt RonnyPfannschmidt added this to the 7.0 milestone Jun 6, 2020
@webknjaz webknjaz changed the title [RFC] Explicit mechanism for supporting xfail in [third-party] plugins [RFC] Explicit mechanism for supporting xfail and skip in [third-party] plugins Jul 20, 2020
webknjaz commented Jul 20, 2020

The issue I faced today (pytest-dev/pytest-forked#44) made me rethink this a bit.

  1. I think this mechanism should also include skip, not only xfail (and maybe other things too).
  2. Markers should have priorities (maybe based on the decoration order, or something else).
  3. Built-in "native" markers should be prioritized over third-party ones. So, for example, if there's a skipif with a truthy condition, it'd be evaluated first and would exclude the test from the execution plan, never giving the plugins a chance to start running it (see the sketch below). This would also make plugins independent of any unknown markers, so they wouldn't have to implement support for checking all those skips and xfails...
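
A hedged sketch of what point 3 could look like: evaluating built-in skip markers during collection, so plugins never see those tests at run time. The hooks used here are real pytest API, but the eager evaluation is the proposal, not current behavior, and this sketch only handles boolean skipif conditions, not the string conditions pytest also accepts:

```python
def pytest_collection_modifyitems(config, items):
    deselected = []
    for item in list(items):
        marker = item.get_closest_marker("skipif")
        # Only boolean conditions are handled here; real skipif also
        # accepts string conditions that pytest evaluates itself.
        if marker is not None and marker.args and all(marker.args):
            items.remove(item)
            deselected.append(item)
    if deselected:
        config.hook.pytest_deselected(items=deselected)
```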

@RonnyPfannschmidt (Member) commented:

It is a problem that pytest-forked moves the complete evaluation into a subprocess.

So the pytest parts that ought to handle markers trigger in the subprocess instead of in the parent.

This flaw is part of why I'd like to see a forked-alike that's based on running a complete pytest session in the subprocess, rather than just forking inside the pytest_runtest_protocol hook (see the sketch below).
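
For context, a condensed sketch of the pattern being criticized, assuming Unix: pytest-forked implements pytest_runtest_protocol and forks inside it, so runtestprotocol() and with it all marker evaluation run in the child process. Shipping the reports back to the parent (done through a pipe in the real plugin) is omitted here, so this is illustrative rather than a working replacement:

```python
import os

from _pytest.runner import runtestprotocol


def pytest_runtest_protocol(item, nextitem):
    pid = os.fork()
    if pid == 0:
        # Child: the complete run-test protocol, skip/xfail handling
        # included, happens here, invisible to the parent's skipping plugin.
        runtestprotocol(item, log=False, nextitem=nextitem)
        os._exit(0)
    os.waitpid(pid, 0)  # parent waits; report transfer is elided
    return True  # tell pytest the protocol was handled by this plugin
```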

@webknjaz (Member, Author) commented:
Interesting... Though this should somehow be documented too.

@bluetech bluetech modified the milestones: 8.0, 9.0 Jan 16, 2024