
Fixed Local::deleteDir #1136

Closed
wants to merge 1 commit

Conversation

modevelops

Solving the issue reported here: #1135

@GrahamCampbell
Member

Is there a way to replicate the original issue with a unit test?

@frankdejonge
Member

@modevelops are the directories being written to during deletion?

@modevelops
Author

Is there a way to replicate the original issue with a unit test?

The problem does not occur on most servers, so I don't think there is a reliable way to unit test this. I don't yet know which server environment causes the problem. I do know of clients on a hostpoint.ch server who run into it.

@frankdejonge
Member

@modevelops are they using a NAS perhaps?

@frankdejonge
Member

Basically, it seems like this is a race condition and your fix will not prevent it. It might narrow the window, but it will still occur. For the next version all listings will be generators to account for large listings, which would effectively require your fix to be undone. My guess is that you need some sort of locking mechanism instead, which is often exactly what's missing in NAS setups, for the same reasons this bug occurs.
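
For illustration only, here is a minimal sketch of what an advisory-lock approach around deleteDir could look like. The lock file path is a hypothetical convention, not part of Flysystem's API, and flock() itself is frequently not honoured across NFS/NAS mounts, which is exactly the limitation mentioned above:

<?php

use League\Flysystem\Adapter\Local;
use League\Flysystem\Filesystem;

include __DIR__ . '/vendor/autoload.php';

$filesystem = new Filesystem(new Local(__DIR__ . '/test_files/'));

// Hypothetical: take an exclusive advisory lock before deleting, so that
// writers following the same convention cannot add files mid-deletion.
$lockHandle = fopen(__DIR__ . '/test_files/.deletedir.lock', 'c');

if ($lockHandle !== false && flock($lockHandle, LOCK_EX)) {
    try {
        var_dump($filesystem->deleteDir('dirname'));
    } finally {
        flock($lockHandle, LOCK_UN);
        fclose($lockHandle);
    }
}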

@modevelops
Author

@modevelops are the directories being written to during deletion?

This was my first thought too, but no! I logged every action in the method to a file, and entries were completely skipped. When I removed the unlink for files, no files were skipped anymore. Iterating an array instead solved the problem. We have used my solution for several clients who reported this problem and it works fine.

@frankdejonge
Member

frankdejonge commented Feb 24, 2020

@modevelops It's still unclear to me why it works; before we can narrow that down, I'm not confident putting this into the package.

@modevelops
Author

modevelops commented Feb 24, 2020

@modevelops It's still unclear to me why it works; before we can narrow that down, I'm not confident putting this into the package.

The pointer of $contents changes during the foreach loop if a file gets deleted, because $contents itself changes in this process. For me it's also totally unclear why this happens only on specific servers. Would a sample script be helpful?
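
For reference, a simplified sketch of the pattern being discussed, with the unlink happening while the recursive iterator is still being consumed. This only illustrates the mechanism and is not the actual code of the Local adapter; the path and flags are placeholders:

<?php

$root = __DIR__ . '/test_files/dirname';

// CHILD_FIRST yields files before their parent directories, so directories
// can be removed after their contents.
$contents = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS),
    RecursiveIteratorIterator::CHILD_FIRST
);

// Deleting entries while the directory stream is still being read: on some
// filesystems (reportedly NFS here) the remaining entries shift and the
// loop skips files.
foreach ($contents as $file) {
    $file->isDir() ? rmdir($file->getPathname()) : unlink($file->getPathname());
}

rmdir($root);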

@frankdejonge
Member

@modevelops It's still unclear to me why it works; before we can narrow that down, I'm not confident putting this into the package.
The pointer of $contents changes during the foreach loop if a file gets deleted, because $contents itself changes in this process. For me it's also totally unclear why this happens only on specific servers. Would a sample script be helpful?

That would be very helpful.

@frankdejonge
Member

Btw, @modevelops, what software is this integrated into?

@modevelops
Author

I'll try to get the sample script done on Wednesday. It's integrated into the shop software Gambio (gambio.com).

@frankdejonge
Member

@modevelops it would be great if you could find out more about the environment your software is deployed on (whether it's on a NAS or not, symlinked or not, etc.).

@modevelops
Author

modevelops commented Feb 26, 2020

@frankdejonge I wrote the sample script and am waiting for our client's permission to share the link with you.

Here are some interesting insights:
On the client's server, only the first 156 files of every sub-directory get deleted; the rest are completely skipped in the foreach when the unlink is executed within the loop. After 156 files in a single directory, the foreach jumps on to the next directory. Without the unlink inside the loop, the foreach does not skip any files. Iterating an array instead of a RecursiveDirectoryIterator object solves the problem even with 10,000 files.

EDIT: A "not" was missing.
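
A sketch of the array-based variant described above, again as an illustration rather than the PR's actual diff. The listing is materialised with iterator_to_array() before any file is removed, so later unlinks cannot affect what the loop visits:

<?php

$root = __DIR__ . '/test_files/dirname';

$iterator = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS),
    RecursiveIteratorIterator::CHILD_FIRST
);

// Snapshot every entry up front; the subsequent unlinks can no longer
// influence which entries the loop visits.
$entries = iterator_to_array($iterator, false);

foreach ($entries as $file) {
    $file->isDir() ? rmdir($file->getPathname()) : unlink($file->getPathname());
}

rmdir($root);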

@frankdejonge
Member

I just ran this without problems:

<?php

use League\Flysystem\Adapter\Local;
use League\Flysystem\Filesystem;

include __DIR__ . '/vendor/autoload.php';

$filesystem = new Filesystem(new Local(__DIR__.'/test_files/'));

// Write 25,000 files into a single directory.
for ($i = 0; $i < 25000; $i++) {
    $filename = 'something_' . $i . '.txt';
    $filesystem->write('dirname/' . $filename, 'this');
}

// Then delete the whole directory in one go.
var_dump($filesystem->deleteDir('dirname'));

@modevelops
Author

I bet this will fail on our client's server. I'll try it tomorrow. My script is very similar. As I said before, it's not a general problem; it runs smoothly on most machines.

@modevelops
Author

@frankdejonge I sent you an email with the sample script.

@modevelops
Author

FYI: the server where the problem occurs uses a Network File System (NFS).

@stale

stale bot commented Apr 28, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
