ENOENT bug - Directories not created #38
Closed
@wibblymat: this fixes #29, which @omnidan first reported and whose root cause @BCooper63 tracked down.
I needed a patch, so I did the easy/simple thing: expose the cache on the extractors module and clear it out every time extract() is called. But I don't really think that's the correct solution. I suppose the most robust, correct solution would be to attempt the mkdir every time (and catch the error if it fails because the directory is already there), since the directory could hypothetically be deleted at any time by any other process. That's probably a bit over the top, though; the more practical and expected behavior would be for the cache to be tied to the decompress-zip instance, as @BCooper63 suggests, instead of global to the module, as it is currently. I didn't implement it that way because it would have required a deeper refactoring, and I wasn't sure whether that's the direction you guys want to go with this and, if so, how exactly you would want to implement it (pass the instance's cache to the extractors module?).
At any rate, this pull request fixes the bug in a minimal way for us, but it could introduce new bugs. For instance, if two decompress-zip instances are fired up one just after the other, the second one will clear the global cache when it is instantiated and could potentially mess up the first one while it is in the middle of running...
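The instance-scoped alternative mentioned above would avoid that race entirely. A minimal sketch (the `Extractor` name and methods here are hypothetical, not the real decompress-zip API): each instance owns its cache, so creating or clearing one instance cannot disturb another that is mid-extraction:

```javascript
// Sketch of a per-instance directory cache. Because the cache lives on
// `this` rather than on the module, two instances running concurrently
// never see (or clear) each other's state.
function Extractor() {
    this.dirCache = {}; // owned by this instance only
}

// Record that this instance has already created `dir`.
Extractor.prototype.markCreated = function (dir) {
    this.dirCache[dir] = true;
};

// Check whether this instance believes `dir` was already created.
Extractor.prototype.isCreated = function (dir) {
    return this.dirCache[dir] === true;
};
```

With this shape, the extractors module would receive the instance's cache (or the instance itself) instead of reaching for module-level state, which is the deeper refactoring referred to above.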