Upload performance OC 8.2.1 (50 mysql queries per file) #20967
We do indeed have a big MySQL issue here... I can confirm this.
@icewind1991 @LukasReschke some more profiling? 🙊
Is Redis enabled as the file locking provider? If not, I'm pretty sure it's file locking. 🙊
Indeed, I only used APCu. Since I can't disable file locking, I tried to enable it using a Redis server. I installed Redis from Debian (jessie), which ships version 2.8.17, and run more or less the standard config, except that I use the socket to connect to Redis. I disabled APCu (https://www.en0ch.se/how-to-configure-redis-cache-in-ubuntu-14-04-with-owncloud/) and the admin page no longer complains: Redis seems to be in use, as the warning that the database is used for file locking has disappeared. Before, I could see many queries to the file-locking table, but those queries have disappeared as well. Unfortunately, the performance got worse:
I only used the standard Redis configuration; perhaps I need to tune some parameters to use memory more efficiently. Is this normal in the logs?
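For reference, a config.php fragment for this kind of setup (Redis over a unix socket as the locking backend) looks roughly like the following. The socket path and values here are illustrative, not the poster's actual file:

```php
<?php
// Illustrative fragment of ownCloud's config/config.php.
// Adjust the socket path to match your redis.conf.
$CONFIG = array(
  'filelocking.enabled' => true,
  'memcache.locking' => '\OC\Memcache\Redis',
  'redis' => array(
    'host' => '/var/run/redis/redis.sock', // unix socket instead of TCP
    'port' => 0,                           // 0 when using a socket
    'timeout' => 0.0,
  ),
);
```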
@tflidd Would it be possible for you to install the free blackfire.io tool on your server? Then upload the file in your browser, open the developer tools, go to the network console, copy the request to upload.php as cURL, and on the command line run "blackfire curl theRestOfTheCurlCommand". Sharing the resulting trace (there is an option for public sharing) would help us a lot. (Technically there is also a 15-day trial of the paid version; you might enable that as well, as it gives more info. ;))
@tflidd Have you removed memcache.local from your config? You need both memcache.locking and memcache.local.
Thanks for your ideas. I will have to verify my caching setup against the docs; it's possible that I missed something. If it doesn't improve, I will try to install blackfire.io on my RPi2.
Why are we talking about Redis when the problem started with the MySQL database? After all, MySQL is the default.
@Danger89 oC 8.2+ uses the database for file locking by default (previously this wasn't the case, and locking only happened when Redis was configured), which can cause more SQL queries than before. If oC is configured to use Redis for file locking, those queries are not sent to MySQL.
Well, I was using APCu with the same drawback: a large number of small files takes forever. PS: I was using Debian 8, which has release 8.1.4, and previously I ran the Apache webserver. I have now set up Redis as the memcache, with Transactional File Locking enabled, and also switched to nginx with php-fpm. Let me check whether I see any improvements or not.
Ok, it took a bit longer. The problem was that blackfire.io does not work on ARM, so I installed OC 8.2.1 in a virtual machine (Debian 8, with apache+mod_php instead of nginx+php-fpm). There are still a lot of SQL queries. I ran blackfire.io as @LukasReschke described (hopefully I understood correctly):
Small update: I uploaded directly via WebDAV and tried to trace it.
The OC\Files\Cache\Updater::update function is responsible for a lot of queries (it updates mtime and size for all parent folders). I commented out all the updates inside this function to get a benchmark for the file transfer alone: all 1180 files (18 MB) are transferred in less than 90 s (before: >20 min). With the updates in place, uploading the 1180 files takes a bit less than 20 min, still much longer than the 90 s.
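To see why this update path is expensive: every uploaded file triggers mtime/size updates for each of its ancestor folders, so the query count grows with files × folder depth. A minimal Python sketch of that scaling (a model of the behaviour described above, not ownCloud's actual code; the function name and parameters are made up for illustration):

```python
# Model: each uploaded file costs one cache INSERT for itself plus
# one or more UPDATEs per ancestor directory (mtime/size propagation).
def queries_per_upload(num_files, avg_depth,
                       updates_per_ancestor=1, inserts_per_file=1):
    """Rough query count for uploading num_files at avg_depth folders deep."""
    per_file = inserts_per_file + avg_depth * updates_per_ancestor
    return num_files * per_file

# 1180 files about 5 folders deep already gives thousands of queries,
# and extra bookkeeping per ancestor multiplies that further:
print(queries_per_upload(1180, 5))                          # 7080
print(queries_per_upload(1180, 5, updates_per_ancestor=3))  # 18880
```

This matches the observed pattern: skipping the propagation (90 s) versus running it per file (~20 min).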
Helpful traces. Thanks a lot, @tflidd 🚀 😄 👍
@icewind1991 Care to take a look? THX
Top 3 bottleneck update-queries:
9.0 should have fewer of these. I have an idea how to reduce the number of these queries further.
Are there plans for a bulk upload? Taking my 1200 files with a total size of 18 MB, it would take less than 1 minute to upload them directly via FTP; at the moment it takes more than 20 minutes. For OC 9 there are some improvements, but it doesn't help much to shorten the time from, e.g., 22 minutes to 18 minutes. We should think about an upload process where this time is reduced to less than 5 minutes. The client could use something like the zip-streamer to combine all the files, and then all the filecache commands are run once on the server. Another idea is to just create the filecache entries for the new files and folders and move the size calculation to a cronjob (or add a command for the oc-client that can trigger such an update).
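The deferred-calculation idea can be sketched as follows: during upload only per-file cache entries are written, and the per-ancestor size updates are folded into a single pass afterwards (what a cronjob or post-upload hook would do), so every folder is updated once instead of once per file. A hypothetical Python model, not ownCloud code:

```python
import posixpath
from collections import defaultdict

def deferred_size_update(files):
    """Given {path: size} for newly uploaded files, compute each ancestor
    folder's total size increase in one pass. One UPDATE per folder could
    then replace one UPDATE per (file, ancestor) pair."""
    folder_delta = defaultdict(int)
    for path, size in files.items():
        parent = posixpath.dirname(path)
        while True:
            folder_delta[parent] += size
            if parent == "/" or parent == "":
                break
            parent = posixpath.dirname(parent)
    return dict(folder_delta)

uploads = {"/a/b/x.txt": 10, "/a/b/y.txt": 20, "/a/z.txt": 5}
print(deferred_size_update(uploads))  # {'/a/b': 30, '/a': 35, '/': 35}
```

Three uploads here collapse into three folder updates regardless of how many files land in each folder.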
@tflidd Good point! Indeed, post-calculations should be done at the end or in a cronjob. The current implementation/protocol is not efficient enough; a redesign may be necessary.
Some discussion about short-term mitigations on the database level took place in owncloud/client#331, especially in owncloud/client#331 (comment).
Also related to (and perhaps one of the two should be considered duplicate): #11588 |
I checked again on my RPi2 with Debian 8 (mariadb, redis) and upgraded from 8.2.1 to 8.2.2 to 9.0beta1. For better comparison, I used the method of #11588 (comment) (10000 files). Here are the results:
Data were estimated after 40 min of upload (50 min for OC 9.0beta1).
I tried again on another machine, a vserver (Debian 8 + mod_php + redis + mysql); there is other stuff running on it, so I couldn't gather statistics on the DB queries. Results are based on 10 minutes of upload. On the OC 9.0beta1 setup:
Just in the first minute, when the transfer was slower, I caught this in my slow-query log (>2 s):
Perhaps I messed something up with my configuration. |
With some MySQL tuning (https://forum.owncloud.org/viewtopic.php?f=31&t=30083#p95636) and these parameters in my.cnf,
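(The exact values from that forum post are not reproduced above. Settings commonly cited for this speed-vs-durability trade-off look like the following; this fragment is illustrative only, not the poster's actual my.cnf:)

```ini
# Illustrative my.cnf fragment: flush the InnoDB log lazily instead of
# on every commit. Much faster for many small transactions, but commits
# from roughly the last second can be lost on a crash or power failure.
[mysqld]
innodb_flush_log_at_trx_commit = 2
```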
I was able to upload more than 1000 files/minute; the total upload time for all 10000 files was 9 min 28 s. This is only a workaround: there is a risk of data loss if the system crashes. More reports on this modification (one user wasn't able to restore his database after a hard reset of his system):
Considering the log from Feb 14, it mentions locking: are you sure that Redis takes care of the locking, and that it is configured properly?
has to be there. Just curious! If Redis does take proper care of the file locking, then the problem is in another area. Honestly, I'm amazed you get it under 10 min on a Pi. While there's surely room for improvement (and #11588 (comment) will already have helped), I think we should keep in mind that the only real solution to this problem is batching.
I did use redis (#20967 (comment)), this is from my config.php:
However, the big performance improvements were only possible on the virtual machine; on the Raspberry Pi 2, I didn't exceed 150-180 files/minute. The bottleneck on the Raspberry Pi is the database, which consumes all I/O operations. I use an SD card to which I can write at ~10 MB/s (vserver: >100 MB/s).
The fastest local memcache would be APCu; on the Pi it's a bit more complicated, as APCu eats more memory (two caches instead of one). But on the VM, APCu should speed things up a little more. Only for memcache.local!
Still not fixed? |
@Danger89 is this happening for you in a recent OC version?
@mrow4a what do you think?
@PVince81 You mean trying to reduce the number of inserts, or optimizing the query? This is probably yet another big project. I think I could have a look and analyse this as a next project. However, I am a big fan of what the guys at CERN have with EOS.
@mrow4a is already working on optimizing SQL queries on upload and other file operations for 10.0, some fixes made it in already. Please retry there. |
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
Upload of many small files is very slow. I tested on a Raspberry Pi 2, which is known to have poor I/O performance. During uploads, MySQL operations create significant I/O wait on the system. It was worse with ext4 journaling, so I disabled it (it's a test system).
Uploading a folder with 1180 files (18 MB) via the oc-client takes about 20-30 minutes (pictures, a lot of text files; take any PHP code of a project), and the number of SQL queries is extremely high (>50 queries per uploaded file!):
A smaller number of queries can be obtained when the files are uploaded to an external folder via FTP (which takes a few seconds) and that folder is then included into ownCloud; the scanning process takes a few minutes (it's hard to tell, 5 minutes?), and the number of queries is much lower (some of them are due to handling the web interface):
I remember you worked on the deletion process for OC 8.0; that scales much better: 3000 files take 3500 update and 1400 select queries. Perhaps a similar improvement is possible for upload operations ;-)
System: RPi2, Debian Jessie
Webserver: nginx/php-fpm
Database: mariadb
Owncloud 8.2.1
Apps enabled:
I haven't tested with a pure WebDAV client yet.