Upsides:
- Can make test setup/teardown faster
- Since we have to run fixtures in independent tasks anyway (see #55, "It's very annoying that fixtures can't yield inside a nursery/cancel scope"), running them concurrently was easier than running them sequentially.
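To illustrate the speedup claim, here is a minimal sketch of concurrent fixture setup. It uses `asyncio.gather` for portability (the plugin itself is Trio-based, and the fixture names and delays here are made up): two independent slow setups overlap, so total setup time is roughly the slowest one rather than the sum.

```python
import asyncio

# Hypothetical slow fixture setup (e.g. opening a DB connection).
async def setup_fixture(name: str, delay: float) -> tuple[str, str]:
    await asyncio.sleep(delay)  # stand-in for slow setup work
    return name, f"{name}-ready"

async def setup_all() -> dict[str, str]:
    # Both setups run concurrently; gather preserves argument order.
    pairs = await asyncio.gather(
        setup_fixture("db", 0.01),
        setup_fixture("server", 0.01),
    )
    return dict(pairs)

fixtures = asyncio.run(setup_all())
print(fixtures)  # {'db': 'db-ready', 'server': 'server-ready'}
```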
Counterarguments:
- Might increase the chance of flaky tests, race conditions, etc.
  - In particular, race conditions involving `ContextVar`s (see #50 (comment)) – currently fixtures all share a `contextvars.Context`, so if multiple fixtures are mutating the same `ContextVar`, the results will be random. (Of course, doing this is inherently nonsensical. But nonsensical-and-consistent is still better than nonsensical-and-random.)
- Might make failing tests harder to debug
- Is there really that much concurrency to extract from fixture setup? How many people use multiple independent fixtures that need to block for significant amounts of time?
- Running them sequentially isn't actually that hard, if we let pytest pick the order for us (see #50 (comment))
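The shared-`Context` hazard above can be seen in a small sketch (fixture names and values are hypothetical): under sequential setup the outcome is at least deterministic – the fixture that runs last wins – whereas concurrent setup would make the surviving value depend on scheduling.

```python
import contextvars

# Hypothetical ContextVar that two fixtures both (nonsensically) mutate.
current_db = contextvars.ContextVar("current_db")

def fixture_a():
    current_db.set("sqlite")

def fixture_b():
    current_db.set("postgres")

# All fixtures share one Context, mirroring the current arrangement.
shared = contextvars.Context()

# Sequential setup: deterministic – the fixture that runs last wins.
shared.run(fixture_a)
shared.run(fixture_b)
assert shared[current_db] == "postgres"
```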
Leaving this issue open for now for discussion, and to collect any positive/negative experiences people want to report.