Change testing for direct mode and V6 to exclusion lists #1789
Conversation
Codecov Report

@@            Coverage Diff             @@
##           master    #1789      +/-   ##
==========================================
- Coverage   85.46%   84.72%   -0.75%
==========================================
  Files         265      265
  Lines       31012    31227     +215
==========================================
- Hits        26504    26456      -48
- Misses       4508     4771     +263

Continue to review full report at Codecov.
Minor comment, due to the overwhelming annotation of the tests -- it might be more concise, IMHO.
@@ -49,6 +49,7 @@ def test_annexificator_no_git_if_dirty(outdir):

@with_tempfile(mkdir=True)
@with_tempfile()
@skip_direct_mode
def test_initiate_dataset(path, path2):
Yikes... Even initiation of a dataset doesn't work in direct mode?
What do you mean by "even"? ;-)
Come on - it's currently 91 out of 764. Admittedly it's not single digit yet, but it's not THAT bad, given that we didn't monitor anything but datalad/tests and datalad/support.
@@ -124,6 +126,8 @@ def test_smoke_pipelines():
@serve_path_via_http
@with_tempfile
@with_tempfile
@skip_direct_mode  # FIXME
Note that if an entire module of tests fails (probably due to issues not to be fixed in that module per se), it might be worth just skipping it at the module level, e.g. how it is done already for scrapy:
from nose import SkipTest

# module-level guard: nose skips the whole module if scrapy is unavailable
try:
    import scrapy
except ImportError:
    raise SkipTest("Needs scrapy")
The entire point is to do it at the per-test level, so we can easily ensure we don't worsen the status quo and can fix that stuff in less complex steps. Have a look at #1562.
Yes, please keep in mind that these are steps towards not having anything fail. We need a quick way to tell whether there is anything worth fixing at the level of individual tests, and when touching them. Please keep the individual test labeling.
Re "more concise":
If the currently running tests pass, this is ready to merge from my point of view.
OK, per test it is.
Failures in builds https://travis-ci.org/datalad/datalad/jobs/271769397 and https://travis-ci.org/datalad/datalad/jobs/271769397 both appear to be unrelated:

Some kind of a global switch overriding this and saying "ignore those decorators" isn't there, no. The intention was to go through the tests, fix them, and remove the decorator. For that you need to remove the decorator and run that specific test anyway, in order to see what exactly needs fixing. If it isn't failing because it was implicitly fixed already, well - then you're done at this point. Go on to the next one. :-)
@bpoldrack I do not follow the reasoning for NOT having an external switch to disable the decorators. My personal workflow is to have a pre-crafted way of executing nosetests when working on the tests. I usually run a subset of the tests that are related to some functionality. The issues underlying the test failures are most likely not related to functionality that is explicitly tested in one or a few dedicated tests, but are more general violations of direct-mode or v6 assumptions. Hence the proper way to run the tests is to run many of them at once. Your proposal would require hand-editing individual tests in the subset that I want to run. I don't see any of us actually doing that. A config switch would be much more suitable.
Maybe a central switch |
… enables the skipping of tests (by the decorators 'skip_v6' and 'skip_direct_mode' ATM). With this switch the skipping can easily be disabled.
Done. Just vice versa. There is
Works for me.
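For illustration, a minimal sketch of how such a switch-controlled skip decorator could look. The environment variable names DATALAD_REPO_DIRECT and DATALAD_TESTS_SKIP_KNOWN_FAILURES below are assumptions for this sketch, not necessarily datalad's actual configuration:

import os
from functools import wraps

from nose import SkipTest


def skip_direct_mode(func):
    """Skip the decorated test on direct-mode builds.

    Skipping can be disabled centrally, so a subset of tests can be run
    without hand-editing individual tests.
    """
    @wraps(func)
    def _wrapped(*args, **kwargs):
        # Both variable names below are placeholders for whatever the test
        # setup actually uses to flag a direct-mode build and to toggle the
        # skipping of known failures.
        on_direct_mode_build = bool(os.environ.get('DATALAD_REPO_DIRECT'))
        skipping_enabled = os.environ.get(
            'DATALAD_TESTS_SKIP_KNOWN_FAILURES', '1') != '0'
        if on_direct_mode_build and skipping_enabled:
            raise SkipTest("test is known to fail in direct mode")
        return func(*args, **kwargs)
    return _wrapped

With something along those lines, disabling the skips for a run over a subset of tests is just a matter of setting the switch in the environment before invoking nosetests, which matches the workflow concern raised above.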
…po.version, which timed out test runs
Morning idea - add a switch/run to the matrix where all those marked tests are expected to fail, so the decorator would run them, catch the exception, and if there is no exception (i.e. the test passes) then it fails. This way we could see right away what/when got fixed, and remove the decorator.
Ah, just saw @yarikoptic's message. That would be a nice addition.
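For illustration, a minimal sketch of that expected-failure idea. The switch name DATALAD_TESTS_EXPECT_FAILURES and the wrapper below are assumptions for this sketch, not datalad's actual implementation:

import os
from functools import wraps

from nose import SkipTest


def expect_known_failure(reason):
    """Decorator factory for tests annotated as known failures.

    In a normal run the test is skipped with the given reason.  In the
    'expected to fail' matrix run the test is executed, a failure is
    swallowed, and an unexpectedly passing test raises an error --
    signalling that its skip decorator can be removed.
    """
    def decorator(func):
        @wraps(func)
        def _wrapped(*args, **kwargs):
            if not os.environ.get('DATALAD_TESTS_EXPECT_FAILURES'):
                raise SkipTest(reason)
            try:
                func(*args, **kwargs)
            except Exception:
                return  # still failing, as expected
            raise AssertionError(
                "%s passed unexpectedly -- remove its skip decorator"
                % func.__name__)
        return _wrapped
    return decorator

skip_direct_mode and skip_v6 could then simply delegate to such a wrapper, so the extra matrix run would report exactly which annotated tests got fixed in the meantime and can lose their decorators.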
Closes #1562

Approach changed to use the decorators skip_direct_mode and skip_v6 for tests currently failing with those builds. In addition there's a comment # FIXME for all of them. SkipTest