How to test session-scoped fixture clean-up when writing plugins? #4193
I'm trying to modify the tests for pytest-docker. One thing I'd like to add is a test that the session-scoped `docker_services` fixture (which spins up and later destroys a Docker Compose environment) gets rid of the Docker volume it creates. All of the approaches I have in mind have problems:

The problem: all of the names are generated randomly. I don't know ahead of time what the volume will be named, so I can't check whether it's gone. I could count the volumes before and after the run, but that would be flaky if some other process on the machine is working with Docker in the background.

The problem: any fixture or test I can create that references `docker_services` will have a narrower scope, so I can't run any test code after the fixture's clean-up.

Any thoughts on the subject? Is there something obvious that I'm missing?

Comments

Hi @butla, I think you got the gist of it. One solution I can think of is to write the volume id somehow to stdout or to a known file location (inside …). Others might suggest more elegant solutions.

I'm gonna submit a PR to the docs about testing plugins if we get an elegant solution or if this thread … :-) I'll go with your suggestion; it seems robust enough.

Great, appreciate it! 😁

So the approach I took [here](https://github.com/AndreLouisCaron/pytest-docker/blob/master/tests/test_integration.py#L42) can be described like this: …

Any ideas for a more elegant approach? If there aren't any, I'll submit that for the docs.

I think that approach is fine @butla, I've used it myself in the past.