Using fixtures in the collection phase #7140
Fixtures currently strictly happen after collection. There are some ideas to enable them for use in marker conditions, but nothing concrete yet. |
Another reason to have fixtures at collection time would be to do complex and context-dependent "skipif" conditions. I use pytest for integration testing and we have a lot of test environments which are used differently depending on the test case, so you want to be able to skip your testcase depending on that environment and on what the testcase decides to test (which is computed in fixtures). My idea was to do a first pass during collection phase and execute certain fixtures marked as "constraints" which would be side-effect free, use that to modify collection, then do the normal second pass during actual execution (possibly using cached values, but that might be a problem for memory usage). |
My opinion is that such a feature is unnecessary, and would only lead to overly complex test structures. It would also obfuscate the intent behind the fixture system (i.e. performing the steps and providing the resources needed for a test at test runtime), and blur the boundaries for when a test actually begins running and what the uses are for fixtures. For this, I would recommend making a custom hook. Then you can have your plugin override the default behaviour, and your new hook can make the collection-time decisions instead of fixtures. |
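As a rough sketch of that kind of collection-time filtering outside the fixture system, using the standard pytest_collection_modifyitems hook (the environment probe and the marker name here are hypothetical, not part of the comment above):

```python
# conftest.py: sketch of collection-time filtering without fixtures.
import pytest


def probe_environment():
    # Hypothetical helper: gather whatever facts about the test
    # environment the skip decision needs, outside the fixture system.
    return {"platform": "x86", "has_gpu": False}


def pytest_collection_modifyitems(config, items):
    env = probe_environment()
    skip_gpu = pytest.mark.skip(reason="no GPU in this environment")
    for item in items:
        if "needs_gpu" in item.keywords and not env["has_gpu"]:
            item.add_marker(skip_gpu)
```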
@iwanb I'm not sure I follow, but this sounds like something that could have a simpler solution. Are you saying that you have test cases that don't know what they're supposed to test until after they start running, and you don't know what it will test until after it decides what it will test? |
Fixtures are a dependency injection system, and enabling them at collection time makes various dynamic parameter details doable much more cleanly, in particular if parameterization and tests practically share resources. Requiring users to have 2 ways to obtain dependencies is a pain. |
Yes the test environment is varied and complex and e.g. using simple tags does not scale. The rationale is that the fixtures are already used to gather data about the environment for the testcase, so it would be handy to use it to influence collection as well (without having to start fixtures which do stateful setups). It's not really the focus of pytest though so I can see why you wouldn't want to add such a feature, just thought I'd add an extra reason to have this. Making the fixture system less tied to test execution could also be an approach, then it could be implemented as a plugin. |
@RonnyPfannschmidt I think I understand what you're saying (but I'm not sure, so correct me if I misunderstood), but I think that's a good separation of concerns to have. Part 1 is gathering requirements/parameters, part 2 is defining and collecting tests around requirements/parameters, and part 3 is executing tests and the fixtures they depend on. Knowing what tests you need is different than getting the data to perform those tests, so they're 2 different types of dependencies. While I wouldn't agree with such an approach (because it means you wouldn't have a consistent set of tests defined and wouldn't know what tests you should have before running the tests, unless this is just data you already have locally), there's always the option to use code in the global scope to gather the data needed to pipe into the params field of your fixtures. That said, my stance on this is from the perspective of not wanting to coopt the fixture system for collection concerns.

@iwanb manually tagging tests/scopes with marks may not scale particularly well, but you can programmatically attach any number of marks after collection, and then filter out the tests you want/don't want using them with the -m flag (see the sketch below). I'm curious to know more about your complex system, and if this approach would work. If not, I'm sure there's other ways to simplify things and I'd be happy to help explore some options. |
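A sketch of that mark-then-filter approach (the platform mapping and mark names are made up for illustration):

```python
# conftest.py: sketch of attaching marks programmatically after collection.
import pytest

# Hypothetical mapping from test name to the platforms it applies to.
SUPPORTED_PLATFORMS = {
    "test_login": {"linux", "windows"},
    "test_firmware_update": {"embedded"},
}


def pytest_collection_modifyitems(config, items):
    for item in items:
        for platform in SUPPORTED_PLATFORMS.get(item.name, {"any"}):
            # Marks can be added dynamically; register them in the
            # "markers" ini option to avoid warnings.
            item.add_marker(getattr(pytest.mark, platform))
```

A run can then be narrowed with, for example, pytest -m "linux or any".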
It's for integration tests with hardware devices running on multiple platforms and multiple software, so knowing if a test applies depends on all that (and it could e.g. only apply on some devices). We also have a more traditional way to group these things and know what to run, but I want to also have a more generic way so that tests can run against as many platforms/software as possible. It's indeed possible to use a hook and do arbitrary things at collection time, but fixtures are already familiar and already contain the necessary data to make the right decision, so it's more a way to bridge the gap. |
This is simple: for me, fixtures are a life cycle management and dependency injection system, and their unavailability at collection time means people are forced to implement poor side channels. The only real reason they are not available is how things initially grew; once runtestprotocol is broken down there are no more technical reasons not to do it. |
@RonnyPfannschmidt gotcha, but then how would we be meant to distinguish between fixtures necessary for a given test, and fixtures meant for defining what tests exist? Or are those the ideas you were referring to?

@iwanb ah I think we've talked about this before. I don't recall how that makes marking sections of tests not scalable, though. |
@SalmonMode there is no difference: if you need a connection to corporate resources both for obtaining an inventory and later for connecting to specific systems, then both the parameterization and the test will request the fixture for that resource; one part will use it to get the parameter list, the other part will use it to get to individual elements. |
@RonnyPfannschmidt I'm not 100% sure I follow, so I'll rephrase, and you can let me know if I missed something. If I use an HTTP client to fetch some resource that contains iterable information that I would pump into a parameterized fixture's params arg in order to generate multiple tests (one for each item in the iterable information), those tests may then need that HTTP client, in combination with the item they received as a result of that parameterization, to make additional requests in order to make the assertion(s) they're meant to. So both the parameterization process and the steps that need to be performed by the tests require that HTTP client. Is that accurate? |
Yes |
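With today's pytest, that shared-resource situation ends up looking roughly like the sketch below (the URL, endpoints, and payload shape are hypothetical): the inventory request has to run at import/collection time outside the fixture system, while the per-test requests go through a fixture.

```python
# test_devices.py: sketch; the inventory service and its responses are made up.
import pytest
import requests

INVENTORY_URL = "https://example.test/api/devices"


def _fetch_inventory():
    # Runs at import/collection time, outside the fixture system.
    return requests.get(INVENTORY_URL, timeout=10).json()["devices"]


@pytest.fixture(scope="module")
def http_client():
    # The same kind of client is needed again at test time, this time as a fixture.
    with requests.Session() as session:
        yield session


@pytest.fixture(params=_fetch_inventory(), ids=lambda d: d["id"])
def device(request):
    return request.param


def test_device_reports_status(device, http_client):
    response = http_client.get(f"{INVENTORY_URL}/{device['id']}/status", timeout=10)
    assert response.status_code == 200
```

The duplication is in _fetch_inventory and http_client: the same dependency is obtained once outside the fixture system (for parameterization) and once inside it (for the tests).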
Gotcha. I would still say those are 2 separate things, despite a shared resource, and I do still believe that would be a red flag in terms of test design, due to the lack of control over what tests will be performed and being beholden to whatever state the system is already in, as opposed to putting the system into the state that the test you want to run is meant to run from. |
From my pov fixtures primarily manage the life cycle of resources. As far as I'm concerned your distinction has no practical value; if pytest still operated like the 1.x series, there wouldn't even be a distinction between collect time and test time, because they would be interwoven. Why gatekeep a resource management tool and force people to reinvent/reintegrate their own ones? |
I don't mean to gatekeep, and I feel like my ideal definition of fixtures falls within your own. The distinction I draw is based in practicality. A test is only as good as it is repeatable. If your tests don't know what they're going to do before they start running, and they're beholden to the state of the system as it already is, then they aren't completely repeatable because they aren't in control. Pytest's fixtures as they stand now, from my pov, are a perfect system for laying out the steps of a repeatable test, and for describing the resources those steps depend on. They describe the essence of what a given test is; not a series of steps to find out what tests can be done. Again, that's just my pov. My goal isn't to force others to write tests a certain way. It's to encourage them to write tests that are repeatable and in complete control of the SUT. I'm not really gonna complain if such a feature as this is implemented, since I can just not use that feature. But I did want to throw my 2 cents out there. |
For my use case I would split fixtures into 'data' ones and actual test resources, and only allow the data ones during collection time. But that would be more of a convention, like avoiding side effects at import time is for modules. |
Closing this because it's been a long time since the last update, and because for such a complex proposal I'd prefer to see a proof-of-concept in a plugin before we look at merging it into Pytest core. |
TL;DR: Is it possible to use the result of the doctest_namespace fixture in the collection phase? If not, what is the name of the first hook after the autouse fixtures have been called?
I'm the maintainer of pytest-sphinx, which is a plugin that adds support for running tests defined in doctest-related rst directives. Those directives are implemented in the sphinx.ext.doctest extension.

One cool feature of sphinx.ext.doctest is that directives can be skipped conditionally by specifying an option inside the body of the directive.
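Something like the following, to sketch the idea with the skipif option (the condition and the doctest content are arbitrary):

```rst
.. doctest::
   :skipif: sys.version_info < (3, 8)

   >>> sorted({"b", "a"})
   ['a', 'b']
```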
If the result of the skipif expression is a true value, the directive is omitted from the test run just as if it wasn't present in the file at all.

In sphinx.ext.doctest the globals used for evaluating the skipif expressions are taken from the evaluation of the code in the doctest_global_setup configuration value (in conf.py), e.g.:
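For instance, an illustrative conf.py value (the guarded import is just an example):

```python
# conf.py (illustrative)
doctest_global_setup = """
try:
    import pandas as pd
except ImportError:
    pd = None
"""
```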
Instead of evaluating a code-string in pytest-sphinx, I think it makes more sense to use pytest's doctest_namespace fixture.
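Roughly along these lines (a sketch; the namespace entry mirrors the conf.py example above):

```python
# conftest.py (sketch)
import pytest


@pytest.fixture(autouse=True)
def add_pd(doctest_namespace):
    # Expose the same "pd" global to the doctests and skipif expressions.
    try:
        import pandas as pd
    except ImportError:
        pd = None
    doctest_namespace["pd"] = pd
```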
How should pytest-sphinx handle the creation of test items in an example like the following?
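An illustrative case (the directives and the skipif option follow sphinx.ext.doctest; the conditions and content are made up), with two testcode/testoutput pairs guarded by complementary conditions:

```rst
.. testcode::
   :skipif: pd is None

   print(pd.Series([1, 2]).sum())

.. testoutput::
   :skipif: pd is None

   3

.. testcode::
   :skipif: pd is not None

   print("pandas is not installed")

.. testoutput::
   :skipif: pd is not None

   pandas is not installed
```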
Every testcode directive should be part of a pytest.Item (regardless of whether the directive is skipped or not). Since the skipif expressions can't be evaluated in the collection phase, AFAIK, the unskipped testoutput directive belonging to the unskipped testcode directive has to be determined as soon as the doctest_namespace is known. This is tricky, but it is probably doable, right?
Do you have a better idea for supporting the skipif option in pytest-sphinx? It would probably be a lot easier if doctest_namespace were not used, because then we could evaluate the skipif expressions at collection time.