Test refactor #49
Conversation
ping @chouseknecht
detiber left a comment
Overall, I like how you've simplified the test generation, and seeing the test coverage go up is an additional plus.
It does appear that the number of tests being run has decreased; is that just a result of duplicated tests that were being run previously?
test/functional/conftest.py (Outdated)
```python
for task in tasks:
    unique_namespaces[task['create']['namespace']] = None
metafunc.parametrize("namespaces", list(unique_namespaces.keys()))
# def pytest_generate_tests(metafunc):
```
Can you kill these comments?
@detiber I think the number of tests being run decreased because I'm not currently generating one test per example listed; I'm generating one test each for create/patch/replace/get/delete (or 5 tests per resource), which are then run if there are examples for the given verb. So if you have 10 service create examples, it will just show up as 1 test. Personally I hate that, but pytest has really bad support for parametrizing tests based on fixtures (what I'd like to do is have the {create|patch|remove|replace}_params fixtures parametrized per example).
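(For context, a minimal sketch of the per-verb generation described above. The names load_examples and resource.run, and the fixture wiring, are assumptions for illustration only, not the actual module.)

```python
import pytest

VERBS = ["create", "patch", "replace", "get", "delete"]


def load_examples(resource, verb):
    """Hypothetical helper: pull the docstring examples for one verb."""
    return getattr(resource, "examples", {}).get(verb, [])


def make_verb_test(verb):
    """Build a single test per verb; it loops over every matching example."""
    def test(resource):  # `resource` is assumed to come from a fixture
        examples = load_examples(resource, verb)
        if not examples:
            pytest.skip("no {0} examples for this resource".format(verb))
        for example in examples:
            # Every example for the verb runs inside this one test,
            # which is why 10 service create examples report as 1 test.
            resource.run(verb, example)  # hypothetical task runner
    test.__name__ = "test_{0}".format(verb)
    return test


# One generated test per verb, i.e. 5 tests per resource.
for _verb in VERBS:
    _test = make_verb_test(_verb)
    globals()[_test.__name__] = _test
```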
Does this mean that multiple replace/patch/delete examples would be run against the same fixture? I'm wondering if that would lead to potential issues. The other thing I wonder about is how we would provide tests that we expect to fail, for example trying to mutate an immutable field?
There definitely is some complexity there, though I think it existed in the previous iteration as well (I don't think all patch tasks can be applied to all create tasks). The best way I've thought of to handle this is to add some metadata to the resource example that ties it to a specific create task. Then, once pytest gets its fixture parametrization working (or before that, it's just a lot more work), we can parametrize the test with pairs of create/{patch|replace} tasks that we are testing.

This is also something we could handle by extending the example metadata and adding logic that checks whether the test is supposed to fail and handles it accordingly. Really my biggest want right now is to get each test running one task; that would make both of the above issues much easier to handle. Not sure if you have any thoughts on how to accomplish that; I spun my wheels on it for a while but didn't come up with anything.
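(A rough sketch of what that extended metadata could look like; field names such as create_id and expect_failure are invented here, not part of the existing examples, and client stands in for whatever API client the tests use.)

```python
import pytest

# Hypothetical example metadata, extended so a patch task names the create
# task it pairs with and can declare that it is expected to fail.
EXAMPLE = {
    "create": {
        "id": "nodeport-service",          # invented identifier
        "namespace": "test-namespace",
        "definition": {"kind": "Service", "spec": {"type": "NodePort"}},
    },
    "patch": {
        "create_id": "nodeport-service",   # ties this patch to one create task
        "expect_failure": True,            # e.g. mutating an immutable field
        "definition": {"spec": {"clusterIP": "10.0.0.1"}},
    },
}


def run_patch(client, example):
    """Sketch of test logic that honours the expected-failure flag."""
    patch = example["patch"]
    if patch.get("expect_failure"):
        with pytest.raises(Exception):
            client.patch(patch["definition"])
    else:
        client.patch(patch["definition"])
```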
It also seems like the tests just broke, likely as a result of issues with patching the resources.
@fabianvf the tests seem to be flaky in general right now; they are failing against master as well. I haven't had a chance to dig in more deeply yet.
@detiber interesting, yeah, it seems they are failing on the same tests in the same versions as well. Bizarre. Was there an update to the openshift containers those versions are using?
Some documentation of the pytest issues: pytest-dev/pytest#349
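(That issue tracks parametrizing with fixtures. The usual workaround, shown here only as an illustration and not something this PR does, is to parametrize with fixture names and resolve each name at runtime via request.getfixturevalue.)

```python
import pytest


@pytest.fixture
def create_params():
    return {"task": "create"}


@pytest.fixture
def patch_params():
    return {"task": "patch"}


# Parametrize with fixture *names*, then resolve each name at runtime.
@pytest.mark.parametrize("params_fixture", ["create_params", "patch_params"])
def test_task(request, params_fixture):
    params = request.getfixturevalue(params_fixture)
    assert "task" in params
```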
@fabianvf PR to your branch incoming, related to the test failures. The test still fails for the latest image, which should actually be expected because those fields are now immutable, but it turns out that, due to a variable name typo, the 1.5 tests were running against the latest image.
force-pushed f8faad2 to 7cd2866
force-pushed 8b9a68f to 55a9261
- add module that generates test classes for each resource that has defined examples
- comment out pytest generate test for now
- remove other tests
- add support for admin config
- cleanup, and make generated class names match normal python standards
- add version and admin metadata to examples for now, needs discussion
- WIP, better fixture scoping and cleanup
- ensure deletes only occur after the test finishes
- fix skip condition so latest isn't always skipped when versions are provided
- wait for resource to be deleted
- Fix version parsing of unprovided version limits
- Update example to use NodePort in the replace rather than the patch, need to see if this is a bug or not, I think it is
- Add skips for unprovided examples to prevent needless creation/deletion of resources
- Fix generator issues in python3
- make test skips work in python3
- remove commented code
- exclude build directory from flake8 tests
- update openshift images and fix typo in env var name
- remove flake8 section and terminal coverage report from setup.cfg
- update tests for new release
force-pushed 55a9261 to 788111c
Failing test cases are legitimate failures.
add module that generates test classes for each resource that has defined examples
Would allow you to add tests for a resource by just writing the docstring example for it.
I wanted to parametrize each of the test functions by the {create|patch|remove|replace}_params fixtures, but pytest doesn't currently support parametrizing with a fixture.
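(Loosely, the kind of class generation meant here. RESOURCES, the example shape, and the placeholder assertion are all made up for illustration.)

```python
# Sketch: generate one test class per resource that has defined examples.
RESOURCES = {
    "Service": {"create": [{"spec": {"type": "ClusterIP"}}]},
    "ConfigMap": {},  # no examples, so no test class is generated
}


def make_test_class(name, examples):
    def test_create(self):
        for example in examples["create"]:
            assert "spec" in example  # placeholder for running the real task
    # Generated class names follow normal Python conventions, e.g. TestService.
    return type("Test{0}".format(name), (object,), {"test_create": test_create})


for _name, _examples in RESOURCES.items():
    if _examples:
        _cls = make_test_class(_name, _examples)
        globals()[_cls.__name__] = _cls
```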