Change all tests under integration to use pytest idioms. #1559
Conversation
And fix one missed exception handling case.
Looks like all my tests were still old style, sorry 'bout that.
No problem, I'm on to that this morning.
I have reverted the removal of the two bfpixelsstore test files for now in order to get this PR merged. These tests can be discussed later.
Some of these tests have been marked as xfail, since any trivial change to fix them, such as adding users to another group or not removing them from their only group, would essentially invalidate the test. Some have had group-removal clean-up deleted. This may also need addressing differently.
Sorry for the hold-up & confusion, @ximenesuk. As long as there is a ticket to bring back the tests, removing them is also fine. Nothing has changed in that stack for 4.4.9, so it's not like we'll be worse off.
@joshmoore No problem - you had an important date to celebrate. I'll re-remove the files and open a ticket for their review/full implementation under the Java integration tests.
@ximenesuk : a separate PR will mark failing tests as xfail?
Also, do you have the list of failing tests from before this PR? Did any get added via this PR? (I can run locally if that would help)
There are tests marked with xfail within this PR. The problem with the remaining failures is that they are intermittent: should those really be marked as xfail? We could use a specific marker, such as pytest.mark.intermittent_fail, but this set is a moving target judging from recent failure logs on hudson (+6 suddenly for search, say). The second case is tests that pass locally but not on hudson. Again, I'm uncomfortable simply marking these tests as xfail just to get hudson green. However, if this PR can be got into place, then we can discuss how these two areas are resolved, which will certainly be with separate PRs. I don't have a list to hand of the tests that were failing before. Also, over the course of this PR there have been merges which have subsequently broken previously passing tests.
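The distinction above can be sketched as follows. This is an illustrative fragment only, not OMERO test code: the test names and the marker name intermittent_fail are assumptions for the sake of the example (the marker would also need registering to avoid warnings in newer pytest).

```python
import pytest

# A deterministic known failure: xfail is appropriate, since "fixing"
# the test would invalidate what it checks.
@pytest.mark.xfail(reason="removing the user's only group would invalidate the test")
def testRemoveUserFromOnlyGroup():
    assert False  # known failure

# An intermittent failure: a distinct custom marker keeps it separate
# from true expected failures, and lets a build deselect it with
#     pytest -m "not intermittent_fail"
@pytest.mark.intermittent_fail
def testSearchIndexing():
    pass
```

Using a custom marker rather than xfail preserves the information that the test is expected to pass in principle, which xfail would obscure.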
A few commit comments made; otherwise looks good. I'd say let's get it in as fast as we can.
This final commit can be tested by using the build system for running tests. For example,
should run the entire integration suite while skipping tests and classes marked with the decorator
should still work.
MARK works both empty ( |
Change all tests under integration to use pytest idioms.
--rebased-to #1622
This PR modifies all OmeroPy integration tests to use pytest idioms, such as plain Python assertions and with blocks for exception handling. Some tests have been fixed along the way and some tidying up has been done. The initial aim of this PR is for the tests to be run as part of a merge build, which will help identify differences between local test results and those on hudson.
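The kind of conversion described above can be sketched like this. This is an illustrative example only, not actual OMERO test code; the names pixels and saveObject are stand-ins.

```python
import pytest

# Old unittest style:
#     self.assertEquals(len(pixels), 1)
#     self.assertRaises(SomeError, saveObject, obj)
#
# pytest style: a plain Python assertion ...
def test_pixels_count():
    pixels = [object()]
    assert len(pixels) == 1

# ... and a "with" block for exception handling.
def test_save_denied():
    def saveObject(obj):            # stand-in for a server call
        raise PermissionError("denied")
    with pytest.raises(PermissionError):
        saveObject(None)
```

Plain assertions give pytest's assertion rewriting a chance to show the compared values on failure, which the old assertEquals style did not.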
To test this PR, use a running, locally built server and, from within
components/tools/OmeroPy/
use the following to run the entire suite:

Please do remember to use the -m flag, otherwise the tests may take up to two hours due to two integration test files! You can run any of the three individual targets for convenience, as below, if necessary:

The expected result here is that all the tests run and there are no errors. It is acceptable for this PR if there are a few failures, and there will be some skips, xfails and possibly xpasses.