Conversation

@fabianvf commented Apr 21, 2017

add module that generates test classes for each resource that has defined examples

This would allow you to add tests for a resource just by writing the docstring example for it.
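As a rough illustration of the idea (the actual docstring format lives in the client sources and may differ), a resource method might carry an example like the hypothetical one below, and the new module would turn each such example into a generated test:

# Hypothetical resource method; the name, signature, and example format are
# illustrative only, not the project's real convention.
def create_namespaced_service(body, namespace):
    """Create a Service in the given namespace.

    Example:
        create:
          - namespace: default
            body:
              metadata:
                name: my-svc
              spec:
                ports:
                  - port: 80
    """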

I wanted to parametrize each of the test functions by the {create|patch|remove|replace}_params, but pytest doesn't currently support parametrizing with a fixture.
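A minimal sketch of that limitation, assuming the examples are already available as plain lists at collection time (load_examples and the fixture name create_params are placeholders): metafunc.parametrize accepts concrete values built in pytest_generate_tests, but there is no supported way to say "parametrize this test with whatever the create_params fixture would return".

def load_examples(verb):
    # Placeholder helper: parse the docstring examples and return a list of
    # parameter dicts for the given verb ('create', 'patch', ...).
    return []

def pytest_generate_tests(metafunc):
    # Parametrization has to happen here, with values known at collection
    # time; it cannot be driven by a fixture.
    if 'create_params' in metafunc.fixturenames:
        metafunc.parametrize('create_params', load_examples('create'))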

@fabianvf requested review from chouseknecht and detiber May 2, 2017 19:33
@fabianvf changed the title from "[WIP] Test refactor" to "Test refactor" May 16, 2017

@fabianvf (Member Author)

ping @chouseknecht

@detiber left a comment

Overall, I like how you've simplified the test generation, and seeing the test coverage go up is an additional plus.

It does appear that the number of tests being run has decreased; is that just a result of duplicated tests that were being run previously?

for task in tasks:
    unique_namespaces[task['create']['namespace']] = None
metafunc.parametrize("namespaces", list(unique_namespaces.keys()))
# def pytest_generate_tests(metafunc):

Can you kill these comments?

@fabianvf commented May 24, 2017

@detiber I think the number of tests being run decreased because I'm not currently generating one test per example listed; I'm generating one test each for create/patch/replace/get/delete (or 5 tests per resource), which are run only if there are examples for the given verb. So if you have 10 service create examples, they will just show up as 1 test.

Personally I hate that, but pytest has really bad support for parametrizing tests based on fixtures (what I'd like to do is have test_{create|get|patch|replace|delete} parametrized by the incoming tasks). It's also super awkward with replace/patch because you get the combinatorial explosion of create_tasks x {patch|replace}_tasks with the resources fixture.
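A rough sketch of the shape described above (the class and fixture names are illustrative, not the actual module): one test per verb, looping over however many examples exist and skipping when there are none, which is why the reported test count drops.

import pytest

class TestService(object):
    # create_examples stands in for a fixture that returns all parsed create
    # examples for this resource; the real fixture names may differ.
    def test_create(self, create_examples):
        if not create_examples:
            pytest.skip('no create examples defined for this resource')
        for params in create_examples:
            # Every example runs inside this one reported test, so ten
            # service create examples still show up as a single test.
            assert params is not None  # placeholder for the real create/verify logic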

@detiber commented May 24, 2017

@detiber I think the number of tests being run decreased because I'm not currently generating one test per example listed; I'm generating one test each for create/patch/replace/get/delete (or 5 tests per resource), which are run only if there are examples for the given verb. So if you have 10 service create examples, they will just show up as 1 test.

Does this mean that multiple replace/patch/delete examples would be run against the same fixture? I'm wondering if that would lead to potential issues.

The other thing that I wonder about is how would we provide tests that we expect to fail? For example trying to mutate an immutable field?

@fabianvf (Member Author)

Does this mean that multiple replace/patch/delete examples would be run against the same fixture? I'm wondering if that would lead to potential issues.

There definitely is some complexity there, though I think it existed in the previous iteration as well (I don't think all patch tasks can be applied to all create tasks). The best way I've thought of to handle this is to add some metadata to the resource example that ties it to a specific create task. Then, once pytest gets its fixture parametrization working (or before that, with just a lot more work), we can parametrize the test with the pairs of create/{patch|replace} tasks that we are testing.
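One way that pairing metadata could look (the keys here are purely hypothetical): each patch/replace example names the create example it applies to, so the pairs can be built at collection time instead of taking the full cross product of create and patch tasks.

create_examples = [
    {'name': 'basic-service', 'namespace': 'default'},
]
patch_examples = [
    # 'applies_to' is the hypothetical metadata tying this patch to one create task
    {'applies_to': 'basic-service', 'patch': {'spec': {'type': 'NodePort'}}},
]

def build_pairs(creates, patches):
    # Pair each patch only with the create example it declares, rather than
    # running every patch against every create.
    by_name = {c['name']: c for c in creates}
    return [(by_name[p['applies_to']], p)
            for p in patches if p['applies_to'] in by_name]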

The other thing that I wonder about is how would we provide tests that we expect to fail? For example trying to mutate an immutable field?

This is also something we could handle by extending the example metadata and adding logic that checks whether the test is supposed to fail and handles it accordingly.
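A hedged sketch of that idea: an expect_failure flag in the example metadata (a hypothetical key) would be checked at run time and turned into an assertion that the API call is rejected.

import pytest

def run_replace(resource, example):
    # resource.replace is a stand-in for whatever client call the real tests
    # make; the expect_failure handling is the only point of this sketch.
    if example.get('expect_failure'):
        # e.g. attempting to mutate an immutable field should be rejected
        with pytest.raises(Exception):
            resource.replace(example['params'])
    else:
        resource.replace(example['params'])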

Really, my biggest want right now is to get each test running one task; that would make both of the above issues much easier to handle. Not sure if you have any thoughts on how to accomplish that. I spun my wheels on it for a while but didn't come up with anything.
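One possible direction, not something proposed in this thread: pytest's indirect parametrization lets collection-time values flow into a fixture via request.param, which may get close to one task per reported test even without upstream fixture parametrization. A minimal sketch, with all names hypothetical:

import pytest

def load_examples(verb):
    # Placeholder helper returning one parameter dict per docstring example.
    return [{'namespace': 'default'}]

def pytest_generate_tests(metafunc):
    if 'created_resource' in metafunc.fixturenames:
        # indirect=True routes each value into the fixture as request.param,
        # so every example becomes its own reported test.
        metafunc.parametrize('created_resource', load_examples('create'),
                             indirect=True)

@pytest.fixture
def created_resource(request):
    params = request.param
    # The real fixture would create the resource here and clean it up after
    # the yield; this sketch just hands the params through.
    yield params

def test_create(created_resource):
    assert created_resource['namespace'] == 'default'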

@fabianvf (Member Author)

It also seems like the tests just broke, likely as a result of issues with patching the resources.

@detiber commented May 24, 2017

@fabianvf the tests seem to be flaky in general right now; they are failing against master as well. Haven't had a chance to dig in more deeply yet.

@fabianvf (Member Author)

@detiber interesting, yeah, it seems the same tests are failing on the same versions here as well. Bizarre; was there an update to the openshift containers those versions are using?

@fabianvf (Member Author)

Some documentation of the pytest issues: pytest-dev/pytest#349

@detiber commented May 25, 2017

@fabianvf PR to your branch incoming, related to the test failures. The test still fails for the latest image, which actually should be expected because those fields are now immutable, but it turns out that due to a variable name typo the 1.5 tests were running against the latest image.

…ined examples

comment out pytest generate test for now

remove other tests

add support for admin config

cleanup, and make generated class names match normal python standards

add version and admin metadata to examples for now, needs discussion

WIP, better fixture scoping and cleanup

ensure deletes only occur after the test finishes

fix skip condition so latest isn't always skipped when versions are provided, wait for resource to be deleted

Fix version parsing of unprovided version limits

Update example to use NodePort in the replace rather than the patch, need to see if this is a bug or not, I think it is

Add skips for unprovided examples to prevent needless creation/deletion of resources

Fix generator issues in python3

make test skips work in python3

remove commented code

exclude build directory from flake8 tests

update openshift images and fix typo in env var name

remove flake8 section and terminal coverage report from setup.cfg

update tests for new release

@fabianvf commented Sep 6, 2017

Failing test cases are legitimate failures.

@fabianvf merged commit 85131e4 into openshift:master Sep 6, 2017