BF: Don't skip subds underneath paths in reckless & recursive drop #7308
Conversation
af804e0 to ecb94d4
This fixes datalad#7013.

If a `drop --recursive --what all --reckless kill` was provided with paths to directories, it silently skipped killing the contained subdatasets. The problem arises when the internal resolving of paths to datasets determines that the provided directory path belongs to the superdataset. Instead of then recursing into the subdirectory's potential subdatasets, `drop` restrained the scope of what was to be dropped to only `'filecontent'`, in order to limit the drop from the determined superdataset to the provided path, as an internal safety mechanism:

```python
if paths is not None and paths != [ds.pathobj] and what == 'all':
    # so we have paths constraints that prevent dropping the full dataset
    lgr.debug('Only dropping file content for given paths in %s, '
              'allthough instruction was to drop %s',
              ds, what)
    what = 'filecontent'
```

This change partially lifts that safety mechanism: whenever `reckless` is set to `'kill'` and the provided path does not match the currently investigated dataset, it adds a `subdatasets` call constrained to the path in question. If this returns a dataset, we know that this dataset is meant to be dropped as well, since `recursive` (which has to be set whenever `reckless='kill'`) must be set, too.
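The decision described above can be sketched as a small standalone function. This is an illustrative model only, not datalad's actual implementation: `constrain_what` and `subdatasets_under` are hypothetical names, and the real code queries subdatasets via the dataset object rather than a callback.

```python
from pathlib import Path


def constrain_what(what, reckless, ds_path, paths, subdatasets_under):
    """Sketch of drop's path-constraint logic after this fix.

    `subdatasets_under(paths)` stands in for a subdatasets query
    constrained to the given paths; it returns the subdatasets
    located beneath them. Names and signature are illustrative.
    """
    if paths is not None and paths != [ds_path] and what == 'all':
        # Path constraints would normally prevent dropping the full
        # dataset, so the scope is narrowed to 'filecontent'.
        # With reckless='kill' (which requires recursive=True), any
        # subdataset found beneath the given paths is meant to be
        # dropped entirely, so the scope is kept at 'all'.
        if reckless == 'kill' and subdatasets_under(paths):
            return 'all'
        return 'filecontent'
    return what
```

For example, dropping a directory that contains a subdataset with `reckless='kill'` keeps the scope at `'all'`, while the same call without `'kill'` still narrows to `'filecontent'`.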
The failure is unrelated:
Thanks @adswa!
Overall reads right, but I think it needs a minor change. Correct me if I'm wrong.
I've cancelled the AppVeyor build because a few time-sensitive CI runs in next and dataverse have been waiting for hours.
I have restarted the AppVeyor build I cancelled on Friday. There is one Travis run that failed, but it looks like something in the test setup glitched and caused widespread failures.
The AppVeyor failures are #7320. Travis failed with only one build, and this one is - once more, sigh - just completely failing to start up the HTTP server for the tests. However, that build should not be relevant WRT the correctness of this PR.