
Interrupted upload caused orphaned files in uploads/ and exception when attempting clean #31226

Closed
dflogeras opened this issue Apr 20, 2018 · 7 comments


@dflogeras

I'm running ownCloud 10.0.7 with PHP 7.1.16 on Gentoo. The server is configured with nginx + php-fpm and uses the SQLite backend.

A user was uploading some files when the transfer got interrupted. While doing routine maintenance, I noticed that the user had 1.7G of data trapped in the data/USERNAME/uploads/ directory. There are about 10 subfolders named web-file-upload-XYZ/, and all but one are empty; the non-empty one contains several file chunks.

I would like to clean this up, but I realize you cannot simply delete the folders: dumping the database, I can see references to these specific web-upload folders.
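For the record, the references show up with a query along these lines (data/owncloud.db and the oc_filecache table are the stock ownCloud locations for a SQLite setup; the LIKE pattern is just illustrative):

    sqlite3 data/owncloud.db \
      "SELECT fileid, path FROM oc_filecache WHERE path LIKE 'uploads/web-file-upload-%';"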

I attempted to run sudo -u nginx php ./occ files:scan MYUSER, but that resulted in an exception with the following output:

The process control (PCNTL) extensions are required in case you want to interrupt long running commands - see http://php.net/manual/en/book.pcntl.php
Starting scan for user 1 out of 1 (MYUSER)
Exception during scan: ErrorException: Undefined index: size
#0 /var/www/localhost/htdocs/owncloud/lib/private/Files/Cache/Scanner.php(420): OCA\Files\Command\Scan->exceptionErrorHandler(8, 'Undefined index...', '/var/www/localh...', 420, Array)
#1 /var/www/localhost/htdocs/owncloud/lib/private/Files/Cache/Scanner.php(381): OC\Files\Cache\Scanner->handleChildren('', true, 3, '1698', true, 0)
#2 /var/www/localhost/htdocs/owncloud/lib/private/Files/Cache/Scanner.php(315): OC\Files\Cache\Scanner->scanChildren('', true, 3, '1698', true)
#3 /var/www/localhost/htdocs/owncloud/lib/private/Files/Utils/Scanner.php(238): OC\Files\Cache\Scanner->scan('', true, 3)
#4 /var/www/localhost/htdocs/owncloud/apps/files/lib/Command/Scan.php(224): OC\Files\Utils\Scanner->scan('/MYUSER', false)
#5 /var/www/localhost/htdocs/owncloud/apps/files/lib/Command/Scan.php(310): OCA\Files\Command\Scan->scanFiles('MYUSER', '/MYUSER', false, Object(Symfony\Component\Console\Output\ConsoleOutput), false, false)
#6 /var/www/localhost/htdocs/owncloud/lib/composer/symfony/console/Command/Command.php(262): OCA\Files\Command\Scan->execute(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#7 /var/www/localhost/htdocs/owncloud/core/Command/Base.php(159): Symfony\Component\Console\Command\Command->run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#8 /var/www/localhost/htdocs/owncloud/lib/composer/symfony/console/Application.php(826): OC\Core\Command\Base->run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#9 /var/www/localhost/htdocs/owncloud/lib/composer/symfony/console/Application.php(189): Symfony\Component\Console\Application->doRunCommand(Object(OCA\Files\Command\Scan), Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#10 /var/www/localhost/htdocs/owncloud/lib/composer/symfony/console/Application.php(120): Symfony\Component\Console\Application->doRun(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#11 /var/www/localhost/htdocs/owncloud/lib/private/Console/Application.php(161): Symfony\Component\Console\Application->run(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#12 /var/www/localhost/htdocs/owncloud/console.php(106): OC\Console\Application->run()
#13 /var/www/localhost/htdocs/owncloud/occ(11): require_once('/var/www/localh...')
#14 {main}

+---------+-------+--------------+
| Folders | Files | Elapsed time |
+---------+-------+--------------+
| 1       | 1     | 00:00:00     |
+---------+-------+--------------+

@ownclouders
Contributor

GitMate.io thinks the contributor most likely able to help you is @PVince81.

Possibly related issues are #18549 (File not found exception), #1373 (Uploading files issue), #19805 (Exception: Please upload the ca-bundle.crt file into the 'config' directory.), #30761 (OCC does not see S3 directory when created via AWS CLI), and #25755 (home storage not writable).

@patrickjahns
Contributor

For cleaning up chunks, please have a look at occ dav:cleanup-chunks; see https://doc.owncloud.org/server/10.0/admin_manual/configuration/server/occ_command.html#dav-commands
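Mirroring the occ invocation from the original report, that would be something like:

    sudo -u nginx php ./occ dav:cleanup-chunks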

@dflogeras
Author

Thanks, that indeed cleaned up the user's uploads/ directory (and some in other accounts). Afterwards I re-ran occ files:scan MYUSER and hit the same exception, so there is still an issue, but the stale uploads were probably a red herring?

@patrickjahns
Contributor

Please check #28031 - your files might have a permissions error, which would make the file scan fail. Other potential things that could break the file scan (see the sketch after this list):

  • dangling symlinks (pointing at nothing)
  • filename encoding issues (e.g. using ö without the proper locale set)
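Both are easy to check with standard shell tooling; a quick sketch, reusing the data path and web server user from the original report (adjust to your own setup):

    # files or directories not owned by the web server user
    sudo find /var/www/localhost/htdocs/owncloud/data/MYUSER ! -user nginx
    # dangling symlinks (GNU find)
    sudo find /var/www/localhost/htdocs/owncloud/data/MYUSER -xtype l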

Just a slight remark - I wouldn't recommend using SQLite on any production instance.

@dflogeras
Author

Thanks for the reference, patrickjahns. It was indeed because I mounted a separate disk for that user's data directory, and the lost+found directory (owned by root) was throwing the scan off. In the end, everything works as expected.
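For anyone who hits the same thing: lost+found lives at the root of every ext2/3/4 filesystem, so mounting a disk directly at the user's data directory drops a root-owned folder into the scanned tree. Checking its ownership makes the problem obvious (paths are illustrative):

    ls -ld /var/www/localhost/htdocs/owncloud/data/MYUSER/lost+found
    # either make it readable by the web server user, e.g.
    sudo chown nginx:nginx /var/www/localhost/htdocs/owncloud/data/MYUSER/lost+found
    # or mount the disk one level higher so lost+found stays outside data/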

Is the SQLite implementation buggy, or does it just not handle larger workloads elegantly? I will probably switch to PostgreSQL at some point, but in this case, even in 'production', it will only ever have one or two concurrent users.

In any case, we can probably close this bug as user error?

@patrickjahns
Contributor

Is the SQLite implementation buggy, or does it just not handle larger workloads elegantly? I will probably switch to PostgreSQL at some point, but in this case, even in 'production', it will only ever have one or two concurrent users.

The database implementation itself is not buggy, but it is not recommended for production use: it is not concurrently usable, and side effects could be present that cause harm.

Migrating from one database engine to another (sqlite -> mysql / postgresql / xx) is not an easy task to perform, so I'd recommend using a different backend from the get-go.
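That said, ownCloud does ship an occ command for the conversion; a sketch of a SQLite -> PostgreSQL move, where the target user, host, and database name are placeholders to replace with your own:

    sudo -u nginx php ./occ db:convert-type --all-apps pgsql oc_admin localhost owncloud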

In any case, we can probably close this bug as user error?

done ;-)

@lock

lock bot commented Jul 30, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
