MAC Client 1.4.1 very high CPU load #1073
I too have the same issue on OS X 10.8.5.
Do you confirm that it's constantly around 100%? Any chance to strace it?
Yes, I can confirm it. The only way to bring the CPU down is by quitting the ownCloud daemon...
Reported in forum as well. See https://forum.owncloud.org/viewtopic.php?f=14&t=17420
Same issue on Windows 7.
I'm running Linux, and it has been suggested to me that it may have to do with my kernel version. That does not seem to be the case, since the Mac and Windows platforms are also affected. I have tried logging the client (using "owncloud --logwindow"), but for each sync cycle (every 30 seconds) the log is truncated to 2.3 MB when saving, so I only see the "last" part of what appears to be a possibly much longer log. (I have 10,000+ files of total size 7 GB in a sync folder.) In the thread at https://forum.owncloud.org/viewtopic.php?f=14&t=17420 I have supplied more information about my setup, the periodicity of the CPU load from the ownCloud client, etc.
Wait a minute folks, I do not see anything near the noted loads on a Win 7 desktop, nor on the Ubuntu installation on the same machine (dual boot on a cheap, 5-year-old PC with a 1.8 GHz Pentium Dual processor). Are you saying that the high load is the result of a large number of files in the local ownCloud folder, or that there is something wrong in the client itself? A recent test synchronizing ~2000 files/folders produced nothing more than 60% processor usage, and that only during momentary spikes. The local polling on my test system produces nothing more than 20% spikes with almost no duration. Certainly nothing here that affects the normal operation of the PC.
We need to take care not to mix things up. Both this issue and the forum thread are about the load caused by the client's normal periodic walk through the local directory tree. We are not talking about an error that locks up the client forever burning 100% CPU, which would be a deadlock. I am wondering why this happens, because we switched the method of detecting local changes with client 1.4.0, IIRC. From that release on, the client relies on the file system change notifications of the underlying system and no longer does a local walk every 30 seconds as it did before. It now does a full walk only every 5 minutes, to make sure that everything is caught, because file system notification is sometimes unreliable for various reasons. So my first advice would be: please check that everybody has updated to 1.4.1.
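The hybrid detection strategy described here (react to file system notifications, with a periodic full walk as a safety net against lost events) can be sketched roughly like this. This is a minimal illustrative sketch, not the actual mirall code; the class name and method names are assumptions:

```python
import time

class ChangeDetector:
    """Sketch: prefer FS notifications (inotify/FSEvents), but fall back
    to a periodic full tree walk because notifications can be lost."""

    def __init__(self, full_walk_interval=300):
        self.full_walk_interval = full_walk_interval  # 5 minutes, per 1.4.x
        self.last_full_walk = time.monotonic()
        self.pending_events = []

    def on_fs_notification(self, path):
        # A notification from the OS watcher queues a targeted scan
        # of just the changed path.
        self.pending_events.append(path)

    def paths_to_scan(self, tree_root):
        """Return which paths need scanning right now."""
        now = time.monotonic()
        if now - self.last_full_walk >= self.full_walk_interval:
            # Safety net: notifications may have been lost, so walk
            # the whole tree instead of only the notified paths.
            self.last_full_walk = now
            self.pending_events.clear()
            return [tree_root]
        changed, self.pending_events = self.pending_events, []
        return changed
```

The key trade-off is visible in `paths_to_scan`: between full walks, work is proportional to the number of changes, not the size of the tree.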
I have used ownCloud for a while, with almost constant folder size over time (10,000+ files and 7 GB). I have my own server and I sync the 7 GB folder on three different machines: two stationary ones and one laptop, all running Linux Mint (14 and 15). When I first installed ownCloud (half a year ago), I had problems with the sync procedure being way too slow... so I gave up. Over time I have upgraded to newer versions of both the server and the client to see if things got better. Suddenly, when the 1.4 client appeared, things started to work much better: no more high CPU loads, and the sync speed was absolutely OK for me. Recently the server was upgraded to 5.0.12 (from 5.0.10, if I remember correctly) and the client to 1.4.1 (from 1.4.0). After that, I have unacceptable CPU loads on the client machines again, in a periodic pattern (60-second cycle with 20-25 seconds of 100% CPU load from the ownCloud client). The sync seems to work, but at a very high CPU load. Can't say whether it happened after the server upgrade or the client upgrade, unfortunately.
I have certainly updated the clients to 1.4.1 (server to 5.0.12), and though I read about the 30-second interval local walk being done away with, there still seems to be polling (as named in the log) at the same interval:

10-09 19:47:54:148 * event notification enabled

A spike of ~20% CPU usage with almost no duration happens on my test system, coinciding with each group of three events. No problem is noted here, but they do happen.
Yes, sure they do. That is the regular check (one HTTP request) if something has changed on the server. That is by design and as you say does not cause trouble. |
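The cheap 30-second check described here boils down to asking the server for one value and comparing it with the last one seen. A hedged sketch of that idea follows; the `RemotePoller` class and the ETag-fetching callback are illustrative assumptions, not the client's actual code:

```python
class RemotePoller:
    """Sketch of the ~30 s remote check: one cheap request for the
    sync root's ETag, and a sync run is scheduled only when the ETag
    differs from the last one seen."""

    def __init__(self, fetch_root_etag):
        # fetch_root_etag: callable returning the server's current
        # root ETag, e.g. via a single WebDAV PROPFIND.
        self.fetch_root_etag = fetch_root_etag
        self.last_etag = None

    def poll(self):
        """Return True if a sync run should be scheduled."""
        etag = self.fetch_root_etag()
        changed = self.last_etag is not None and etag != self.last_etag
        self.last_etag = etag
        return changed
```

Because only one small HTTP round trip happens per poll, this check stays cheap regardless of how many files are in the sync folder, which matches the observation that it causes no trouble.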
Thank you for confirmation dragotin. I note that there is no problem on my test system, however, through conversation with oedfors, the same events cause one of the four threads to spike to 100% causing a heat and power issue on his system. Does the client design make use of multiple threads on a hyperthread capable processor? While I am asking; Have the client / server been tested or perhaps built with Linux kernel 3.5 and / or above? |
Ok, we seem to be approaching the core problem here. It seems @oedfors has a larger data set being handled. We can revisit the code that does the remote ETag check to see if we can optimize it further. @oedfors, can you tell us how many files and directories you have on the top level of the sync dir? Threads: yes, we use a thread to do the syncing, but we have no parallel running threads yet, and it's questionable whether that would help, IMO. Yes, kernels > 3.5 are in use. Why do you ask?
My questions are just to help me rationalize the differences seen between oedfors' system and my test system. The threads in oedfors' system show great differences (image displayed on forum), i.e. 99.5%, 9%, 10% and 6% usage, while the graphs of my CPU usage show two cores with relatively equal usage. Since I have not upgraded beyond Linux kernel 2.6, could this be the reason I see different load levels? Thank you for the answers provided; now, back to your ideas on oedfors' system...
The answer to your question is: On the top level of my sync directory I have 5 small files (~100 kB and below) and 24 directories (of which the largest contains almost 4 GB data and the smallest only a few kB). Thank you again guys for assisting in this matter! |
this is the corresponding issue on server side: owncloud/core#5255 |
I can reproduce this when toggling between (manual and system-specified) proxy settings. As a consequence, mirall and csync get called again and again in succession. Looking into it.
How many items (files or directories) are there in the top-level directory? It would be nice to have mirall run through a profiler to know what is really taking so much CPU.
I was interested in the number of files in the root directory, not including subdirectories. Does the problem occur when it checks for changes on the server (every 30 seconds), or when it does a full sync to detect changes locally (every 5 minutes)?
I fixed the bug described by @danimo above (changing proxy settings frequently) with commit f841450. Ignore files were not read correctly. It should be quickly described why that commit fixed the problem: the ignore file list was not properly read by mirall before. As a result, mirall did not properly ignore changes to the database file, and that in turn made the folder get rescheduled constantly. With commit d0d3626 I also made the database files always be ignored, to avoid that. I think this could easily have caused the problem. I consider it fixed, but let's leave the issue open so people can retest with 1.4.2.
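The idea behind the second part of the fix (always ignoring the sync database, regardless of what the ignore file says) can be sketched like this. The pattern strings and function names here are illustrative assumptions, not the actual mirall/csync code:

```python
import fnmatch

# Assumed pattern names for illustration: the point is that the sync
# database must always be ignored, otherwise writing it during a sync
# looks like a local change and reschedules the folder forever.
ALWAYS_IGNORED = ["*.csync_journal.db*"]

def load_ignores(user_patterns):
    # Built-in database patterns are appended unconditionally, so a
    # misread or missing ignore file can no longer cause a sync loop.
    return list(user_patterns) + ALWAYS_IGNORED

def is_ignored(filename, patterns):
    """Glob-style matching, as ignore lists typically use."""
    return any(fnmatch.fnmatch(filename, p) for p in patterns)
```

The design point is defense in depth: correctness of the ignore-file parser no longer decides whether the client's own bookkeeping files trigger resyncs.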
Fixed in 1.4.2 |
Thanks for fixing the problem! When is 1.4.2 expected? |
I appreciate this is closed, but FWIW: we have rolled out a test of OC for about 10 users -- we would like to have it serve 50+. Server: Ubuntu 12.04.3 LTS / PHP 5.3.10-1ubuntu3.8 with Suhosin-Patch (cli) / ownCloud 6.0 beta. Mac client 1.4.2 on OS X 10.8.2 used a tremendous amount of CPU with no break for hours until I killed and restarted it. It bounced between 90% and 30% every second in top, but mostly stayed between 70% and 40%. Our test sample has about 400 files in 30 directories totaling less than 1 GB. I surmise that it was syncing non-stop 100% of the time for those hours. This was from the first use (which I need to test thoroughly before rolling out to users). Of note: I am not sure if there is some use case for a first-time sync with these errors that causes constant syncing. Restarted with --logwindow, but it's too late; everything is fine now.
Hi, I'm running OC 6.0.0a with mirall 1.5 and I'm seeing 25% CPU usage every 300 seconds when csync runs. The problem is what the csync message log shows. Is this the expected behaviour? The forums seem to say that the csync walking procedure (csync open dir, _csync_merge_algorithm_visitor, etc.) should take around a second. This is not the case for me, but I do have more files.
@Chluz Actually, the client checks your local ownCloud (sync) folder, every 30 seconds. You are seeing it every 300 seconds, simply because it takes 300 seconds to scan the '40,000' files. The speed of your drive could be an issue, however, attempting to keep 40,000 files in sync (or any operation involving continual access of 40,000 files) certainly gives your laptop a reason to run hot, in any case. Have you considered reducing the number of files (certainly there is not a chance of 40,000 files changing in any short interval of time) or increasing the scan interval of the client? |
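The local check being discussed is essentially a tree walk that stats every file and compares it against the sync database, which is why cost scales with file count. A minimal sketch of that idea, assuming (as an illustration, not from the actual csync code) that modification time and size are the compared fields:

```python
import os

def find_local_changes(root, db):
    """Sketch of the periodic local walk: stat every file under root
    and report those whose (mtime, size) differ from the database.
    With 40,000 files this means 40,000 stat calls per scan, which
    is where the periodic CPU spike comes from."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if db.get(path) != (st.st_mtime, st.st_size):
                changed.append(path)
    return changed
```

Even when nothing has changed, every file is visited once per scan, so the walk's duration grows linearly with the number of files and with the latency of the underlying disk.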
We are considering obsoleting the every-5-minutes check, but there is still some work to do. Still, I wonder why the 40,000-file check takes so long.
Hi srfreeman, and thanks for your answer. I do see successful syncs every 30 seconds, but these only take one or two seconds to complete. Every 300 seconds, though, it looks like csync travels through a few (if not all) directories, doing things that I don't really understand.
@Chluz Your perceived speed issue may be caused by any number of things, but, looking at just the first part of your log; How often would you think the content of the image and music files contained in the third or fourth copy of Dad's phone backup will change? If you surmise correctly that the answer is 'never'; Why would you subject these files to a process that checks them for changes, whether it be every thirty seconds or even, every three hours? Another question would be; Are you running the Dropbox client on this same device? |
@srfreeman , sorry for the late response, I was without internet. |
@Chluz the documentation for the ownCloud client provides a clear warning: "Syncing the same directory with ownCloud and other sync software such as Unison, rsync, Microsoft Windows Offline Folders or cloud services such as DropBox or Microsoft SkyDrive is not supported and should not be attempted. In the worst case, doing so can result in data loss." I would imagine that once you have rectified the 'two client access' issue and reduced the number of files (from looking further in your log, very few files could benefit from the scrutiny and two way synchronization provided by the ownCloud client), you will see little need to adjust the interval. For simple copying (one way synchronization, if you will) of files from the ownCloud server, I would recommend that you take a look at the many WebDAV clients available. This could provide you with a low resource intensive way of providing off site backup of files. Of course, a decent backup strategy for the ownCloud server could make very little additional copying necessary. From our tests and usage by hundreds of users (US, nation wide through three interconnected data centers), the actual synchronization file load through ownCloud's desktop clients is so low that any question of software efficiency is a moot point. |
hi @srfreeman, just to clarify: dropbox and owncloud have the same files, but are running from two different folders (I copied the dropbox folder content to the owncloud folder). I originally used WebDAV to transfer some content over, but you're right in saying I should clean up the sync folders. I will do that asap, thanks for your help.
@dragotin: What is this "every 5 minute check" you speak of? Another user is seeing CPU spikes like this every 5 minutes with the 1.5.0 client on Mac OS X 10.8.5. Is this check still in the 1.5.0 code? What is it checking? Will it be obsoleted? And if so, which client version will obsolete the check?
@dragotin, any comments on the 5-minute check? It turns out we are seeing the high CPU every 5 minutes. In addition, when the client window is up (i.e. from the Settings menu), we also see 17-18% CPU...???
What we do until 1.5.0 is: we run a full sync every five minutes that compares both the remote and the local side. This is what users may recognise as high CPU even though nothing has changed. Normally, we trigger sync runs when the local file system watcher notifies us of a change. Unfortunately, the file system watchers are not 100% reliable on any of our platforms. In rare cases they "lose" events, which would result in a change not being synced for the user. We want to avoid that, and as a compromise we decided to do an extra sync every five minutes. Now that we face the feedback that it's annoying, we were already discussing skipping that five-minute sync for the 1.5.1 release, but we weren't sure. @ser72, what do you think? @MTRichards?
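The compromise described here, sync on watcher events but also force a run if too long has passed since the last sync, hinges on correctly resetting the "last sync" timestamp. A sketch of that scheduling logic follows; the class, the interval value, and the method names are illustrative assumptions:

```python
import time

class SyncScheduler:
    """Sketch of the 'extra sync as a safety net' compromise: sync on
    watcher events, but force a full run if the time since the last
    completed sync exceeds a configured interval."""

    def __init__(self, force_interval=300):  # five minutes, as discussed
        self.force_interval = force_interval
        self.last_sync = time.monotonic()

    def record_sync(self):
        # Must be called after every completed sync. If this reset is
        # ever missed, "time since last sync" grows without bound and
        # a full sync gets forced on every single check.
        self.last_sync = time.monotonic()

    def should_force_sync(self):
        return time.monotonic() - self.last_sync >= self.force_interval
```

Note how fragile the invariant is: the forced-sync decision is only as good as the bookkeeping around `record_sync`, which is exactly the kind of state that a log showing an implausibly large "time since last sync" would point to.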
@dragotin I'm hoping this sync can be avoided :) |
To add to this thread: on some filesystems inotify does not work (e.g. AFS), so I would be careful with this optimization. Also, inotify events may be lost on a busy filesystem if you are not fast enough to grab them from the system queue. So unless you do something really smart, you'd have to stay on the safe side... Here are limitations from box.com that someone recently pointed me to: "3.1 Do not sync a large number of files and folders. Optimum Performance: 10,000 files & folders maximum"
It is reported that even with the 1.5.2 client the CPU load on the Mac client is high. See data on S3 at support/HighMacCPU
@ser72 I checked the logfiles you provided again. This is the crucial part:
The sync is forced again way too early; I will investigate that. It looks buggy.
@dragotin Thanks. |
The value of 62430 seconds for the time since the last sync is strange, and wrong, and the reason why the next sync is forced right away. @ser72, could you ask whether the user has custom values for remotePollInterval and/or forceSyncInterval in the config file? Thx.
In @ser72's logfile, all the "time since last sync" values are monotonically increasing, although 3 syncs are happening.
Requested info from the user |
@guruz yes, the reset of |
@ser72 If the user restarts the client, does this issue happen again? |
@jcfischer Does this happen when you restart the client? I believe nothing special occurs to trigger this. If my notes are correct, it occurred after a fresh install of the client. @jcfischer is that correct? |
Also, does this happen with 1.5.3? There's a small change in there which could influence this issue.
Have not tried with 1.5.3 yet. Busy this week with workshops, will have time on Thursday. Cheers
@jcfischer Have you had the chance to test 1.5.3 client for the high CPU yet? |
1.5.3 has behaved very well today… But I have seen phases of near constant high CPU load |
@ser72 that is probably the inefficient code we still have in the update phase, but that will be addressed in the upcoming 1.6.0 release. |
I am still experiencing these problems using version 1.6.0 on Ubuntu, server version 6.0.3. However, I do have a large number of files:

$ find . -type f | wc -l
can you
Did it finish the first sync run to download/upload everything? |
The problem happened during the first sync. I finally discovered the cause: the connection was unstable (over wifi with poor reception). Each time the connection was lost, the whole sync process started over! The client seems to discard everything it has already done when the connection is re-established. I fixed the connection issues, and after 10-12 hours it finished.
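The behaviour described, restarting the whole first sync after each disconnect, is what a stateless transfer loop does. One way to make a first sync resumable is to journal completed transfers, so a reconnect continues where the previous run stopped. This is an illustrative sketch of that idea, not the client's actual recovery code:

```python
def sync_files(files, download, journal=None):
    """Sketch of a resumable first sync: files recorded in the journal
    are skipped, so after a dropped connection the run continues where
    it left off instead of starting over from the first file."""
    journal = journal if journal is not None else set()
    for name in files:
        if name in journal:
            continue  # already transferred before the disconnect
        download(name)    # may raise ConnectionError on a flaky link
        journal.add(name)
    return journal
```

Persisting the journal (e.g. in the sync database) is what turns a connection drop from a full restart into a cheap resume, which matters most on exactly the kind of poor wifi link described here.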
1.6 has a lot of improvements in this area. I'm closing this as it has been fixed (if you have a stable internet connection). |
Since my update to 1.4.1, the CPU load of the ownCloud daemon is constantly around 100%.