
Change testing for direct mode and V6 to exclusion lists #1789

Merged
merged 9 commits into from
Sep 5, 2017

Conversation

bpoldrack
Member

@bpoldrack bpoldrack commented Sep 4, 2017

Closes #1562

The approach was changed to using the decorators skip_direct_mode and skip_v6 for tests currently failing with those builds. In addition, there is a # FIXME comment on all of them.

  • create initial lists to represent the status quo
  • mark the affected tests with decorators that raise SkipTest
  • properly set up Travis to make use of the above for direct mode (DM) as well as V6

@codecov-io

codecov-io commented Sep 4, 2017

Codecov Report

Merging #1789 into master will decrease coverage by 0.74%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master    #1789      +/-   ##
==========================================
- Coverage   85.46%   84.72%   -0.75%     
==========================================
  Files         265      265              
  Lines       31012    31227     +215     
==========================================
- Hits        26504    26456      -48     
- Misses       4508     4771     +263
Impacted Files Coverage Δ
datalad/interface/common_cfg.py 100% <ø> (ø) ⬆️
datalad/interface/tests/test_save.py 100% <100%> (ø) ⬆️
datalad/distribution/tests/test_subdataset.py 100% <100%> (ø) ⬆️
datalad/interface/tests/test_run.py 98.68% <100%> (+0.03%) ⬆️
datalad/tests/test_direct_mode.py 98.18% <100%> (+0.18%) ⬆️
datalad/crawler/pipelines/tests/test_fcptable.py 47.16% <100%> (-0.84%) ⬇️
...rawler/pipelines/tests/test_openfmri_collection.py 100% <100%> (ø) ⬆️
datalad/support/tests/test_annexrepo.py 95.37% <100%> (+0.05%) ⬆️
datalad/metadata/tests/test_manipulation.py 100% <100%> (ø) ⬆️
datalad/distribution/tests/test_publish.py 100% <100%> (ø) ⬆️
... and 43 more

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 5440a9a...81d4f5b. Read the comment docs.

Member

@yarikoptic yarikoptic left a comment


Minor comment due to the overwhelming annotation of the tests -- it might be more concise, IMHO.

@@ -49,6 +49,7 @@ def test_annexificator_no_git_if_dirty(outdir):

@with_tempfile(mkdir=True)
@with_tempfile()
@skip_direct_mode
def test_initiate_dataset(path, path2):
Member


Yikes... Even initiation of a dataset doesn't work in direct mode?

Member Author


What do you mean by "even"? ;-)
Come on - it's currently 91 out of 764. Admittedly it's not single digit yet, but it's not THAT bad, given that we didn't monitor anything but datalad/tests and datalad/support.

@@ -124,6 +126,8 @@ def test_smoke_pipelines():
@serve_path_via_http
@with_tempfile
@with_tempfile
@skip_direct_mode #FIXME
Member


Note that if an entire module of tests fails (probably for issues not to be fixed in that module per se), it might be worth just skipping it at the module level, e.g. the way it is already done for scrapy:

from unittest import SkipTest

try:
    import scrapy
except ImportError:
    raise SkipTest("Needs scrapy")

Member Author

@bpoldrack bpoldrack Sep 4, 2017


The entire point is to do it on a per-test level, so we are able to easily ensure we don't worsen the status quo and we can fix that stuff in less complex steps. Have a look at #1562.

Member


Yes, please keep in mind that these are steps towards not having anything fail. We need a quick way to tell whether there is anything worth fixing at the level of individual tests, and when touching them. Please keep the individual test labeling.

@bpoldrack
Member Author

Re "more concise":
Nope. See the comment above and the issue to be closed by this PR. You're missing the point, I guess. ;-)

@bpoldrack bpoldrack changed the title WIP: Change testing for direct mode and V6 to exclusion lists Change testing for direct mode and V6 to exclusion lists Sep 4, 2017
@bpoldrack
Member Author

If currently running tests are passing, this is ready to merge from my point of view.

@yarikoptic
Member

Ok, per test it is.
Seems it needs a bit more work; some tests still fail.
In the future, would there be a way to quickly enable all those tests to see which might have been fixed?

@bpoldrack
Member Author

Failures in builds https://travis-ci.org/datalad/datalad/jobs/271769397 and https://travis-ci.org/datalad/datalad/jobs/271769397 both appear to be unrelated: datalad.crawler.pipelines.tests.test_crcns.test_get_metadata fails with an HTTP request resulting in error 504.
@yarikoptic : Can you tell whether or not the actual request is okay? And if so: how do we deal with that in the tests?

Some kind of a global switch overriding this to say "ignore those decorators" isn't there, no. The intention was to go through the tests, fix for them, and remove the decorator. For that you need to remove the decorator and just run that specific test anyway, in order to see what exactly to fix for. If it isn't failing because it was implicitly fixed already, well - then you're done at this point. Go on to the next one. :-)
However, I do see the point of temporarily disabling these decorators to see what else you might have fixed whenever you fix one of the underlying issues. But then: just disabling them isn't sufficient. You wouldn't easily see what was skipped before and is passing now.
I think that's a job for a little bash script that finds the decorators via git-grep, comments them out, and only runs the tests defined in the lines right after those decorators.

@mih
Member

mih commented Sep 5, 2017

@bpoldrack I do not follow the reasoning on NOT having an external switch to disable the decorators. My personal workflow is to have a pre-crafted way of executing nosetests when working on the tests. I usually run a subset of the tests that are related to some functionality. The issues underlying the test failures are most likely not related to functionality that is explicitly tested in one or a few dedicated tests, but are more general violations of direct-mode or v6 assumptions. Hence the proper way to run the tests is to run many of them at once. Your proposal would require hand-editing individual tests of the subset that I want to run. I don't see any of us actually doing that. A config switch would be much more suitable.

@mih
Member

mih commented Sep 5, 2017

Maybe a central switch datalad.tests.dont_skip_known_failures would be appropriate.

… enables the skipping of tests (by decorators 'skip_v6' and 'skip_direct_mode' ATM). By this switch the skipping can easily be disabled.
@bpoldrack
Member Author

Done. Just vice versa: there is datalad.tests.skipknownfailures now, which defaults to 'yes'.

@mih
Member

mih commented Sep 5, 2017

Works for me.

@yarikoptic
Member

Morning idea - add a switch/run to the matrix where all those marked tests are expected to fail, so the decorator would run them, catch the exception, and if there is no exception (i.e. the test passes), then it fails. This way we could see right away what/when got fixed, and remove the decorator.

@mih mih merged commit 869f97e into datalad:master Sep 5, 2017
@mih
Member

mih commented Sep 5, 2017

Ah, just saw @yarikoptic 's message. That would be a nice addition.

@bpoldrack bpoldrack deleted the tst-direct-v6 branch September 2, 2020 10:21
Development

Successfully merging this pull request may close these issues.

Use exclusion list to disable select tests from direct mode testing
4 participants