Resolve high CPU usage when performing DB reads #419
Conversation
* Disable automatic indexing as we specifically create the required indexes
* Tell SQLite to store temporary tables in memory. This will speed up many read operations that rely on temporary tables, indices, and views.
* Add links & reasoning behind other PRAGMA settings used
* Add new index specifically for the driveId & parentId pairing (a minimal sketch of these settings follows below)
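The effect of these changes can be illustrated outside the client. The following is a minimal sketch using Python's sqlite3 module; the `item` table, its `driveId` / `parentId` / `name` columns, the database file name and the index name are assumptions chosen for the example, not necessarily the client's exact schema.

```python
import sqlite3

db = sqlite3.connect("items.sqlite3")

# Disable SQLite's on-the-fly automatic indexes; the indexes that matter are
# created explicitly below, so automatic indexing only adds overhead.
db.execute("PRAGMA automatic_index = false;")

# Keep temporary tables, indices and views in memory rather than on disk,
# which speeds up read operations that materialise temporary structures.
db.execute("PRAGMA temp_store = MEMORY;")

# Illustrative table only - the real client schema will differ.
db.execute("""
    CREATE TABLE IF NOT EXISTS item (
        driveId  TEXT NOT NULL,
        id       TEXT NOT NULL,
        parentId TEXT,
        name     TEXT NOT NULL,
        PRIMARY KEY (driveId, id)
    )""")

# Composite index for the driveId & parentId pairing, so listing the children
# of a folder is an index lookup instead of a full table scan.
db.execute("""
    CREATE INDEX IF NOT EXISTS item_driveid_parentid
        ON item (driveId, parentId)""")
db.commit()
```

With automatic indexing disabled, SQLite no longer builds throw-away indexes at query time, so any frequent lookup has to be covered by an explicit index such as the one above.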
@norbusan The question is - how to 'fix' this properly? The easiest way would potentially be to 'revision' the database's internal version counter: when I was coding all the CentOS / Fedora fixes to overcome the DB issues, if the DB version was < X the client would re-create the tables & database. Worth testing whether that increment resolves this situation as well? Potentially it 'should', but I do not know if the old indexes would still be hanging around or whether the new ones would get auto-created correctly - it would need complete testing.
Yes, that is probably the best idea. In shotwell (a photo editor) where I also contribute(d), there is a database table version saved in the DB, and the program compares it and runs update routines depending on the version. So assume we are at version 1 now, and bump it in the program to version 2. On the next run the program sees that the saved DB is actually version 1, adds the index, and updates the version saved in the database. That works nicely. By contrast, I am not sure whether calling CREATE INDEX on every database open is asking for problems - I don't know what sqlite does in case there is already an index.
* To force DB schema & index creation, bump DB schema version
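One common way to implement the version bump described above is SQLite's built-in `PRAGMA user_version` counter. The sketch below only illustrates that pattern, reusing the illustrative `item` table from the earlier sketch; the client itself may track its schema version differently, and the constant and index names are made up for the example.

```python
import sqlite3

SCHEMA_VERSION = 2  # bumped from 1 so existing databases pick up the new index

def open_db(path):
    db = sqlite3.connect(path)
    # Illustrative table so the sketch also runs on a fresh file.
    db.execute("""
        CREATE TABLE IF NOT EXISTS item (
            driveId  TEXT NOT NULL,
            id       TEXT NOT NULL,
            parentId TEXT,
            name     TEXT NOT NULL,
            PRIMARY KEY (driveId, id)
        )""")
    (stored_version,) = db.execute("PRAGMA user_version").fetchone()
    if stored_version < SCHEMA_VERSION:
        # The saved database predates the new index: add it (IF NOT EXISTS
        # makes this idempotent) and record the new schema version.
        db.execute("""
            CREATE INDEX IF NOT EXISTS item_driveid_parentid
                ON item (driveId, parentId)""")
        db.execute("PRAGMA user_version = %d" % SCHEMA_VERSION)
        db.commit()
    return db
```

Because CREATE INDEX IF NOT EXISTS is idempotent, re-running the upgrade against an already-upgraded database is harmless, which also sidesteps the concern about what sqlite does when the index already exists.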
@norbusan
When this gets pushed into master, everyone's DB will get updated to support the new index.
* Update handling of skip_dir and skip_file parsing - only check whether a file is excluded if its parent directory is not already excluded
* Add another index for selectByPath database queries (see the path-lookup sketch after this list)
* New build option to get more DEBUG symbolic information
* Update ldc2 debug handling
* Use boolean values rather than on / off values
* Enable auto_vacuum for entry deletes / database cleanup (sketched below)
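As a rough illustration of why a dedicated index helps a selectByPath-style query, the sketch below resolves a path one component at a time against the same assumed `item` table as the earlier sketches, so each step becomes an index seek on (driveId, parentId, name) rather than a table scan. Modelling the root as a NULL parentId, and the function and index names, are assumptions for the example, not the client's actual code.

```python
import sqlite3

db = sqlite3.connect("items.sqlite3")
db.execute("""
    CREATE INDEX IF NOT EXISTS item_path_lookup
        ON item (driveId, parentId, name)""")

def select_by_path(db, drive_id, path):
    parent_id = None  # assumed convention: the root item has a NULL parentId
    row = None
    for component in path.strip("/").split("/"):
        # One index seek per path component.
        row = db.execute(
            "SELECT id FROM item "
            "WHERE driveId = ? AND parentId IS ? AND name = ?",
            (drive_id, parent_id, component)).fetchone()
        if row is None:
            return None  # path component not found
        parent_id = row[0]
    return row
```

For the last two points, a short hedged sketch; FULL is used here purely for illustration, as the PR's actual auto_vacuum mode is not shown above. Changing auto_vacuum on a database that already contains tables only takes effect after a VACUUM.

```python
import sqlite3

db = sqlite3.connect("items.sqlite3")

# Boolean literal rather than the equivalent 'on' / 'off' spelling.
db.execute("PRAGMA automatic_index = false;")

# Reclaim pages freed by DELETEs so the database file shrinks during cleanup.
db.execute("PRAGMA auto_vacuum = FULL;")

# Required for the auto_vacuum change to apply to an existing database file.
db.execute("VACUUM;")
```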
@abraunegg I am for merging this, it has shown its use already and fixed several participants' problems.
Agreed - merging
I don't see that this / my issue is fixed, because in a 45s interval my CPU is at 100% for 30s with onedrive v2.2.6-21-ga9795dd - or am I doing something wrong? :/
@rednag If not - please 'delete' the items database file and test / try again. Also, as this code is merged into master, with v2.3.0 pending - please ensure you rebuild your client from 'master' |
This is 'database' re-creation - taking OneDrive JSON data and processing it.
This is scanning the database / validating the contents. At the end of this sequence, you will see:
This is where the client is now performing a 'walk' of your sync_dir to ensure that all files / folders are actually uploaded. This is where your 100% load is most likely coming from now. This is currently normal application behaviour. So you have a couple of options:
If you are still having issues, please:
Thank you for your support :) - how long does this sequence take?
Mar 25 06:38:34 CX onedrive[5399]: Processing JHPSAEM7E2QSZJ7ORDIPHTRXH5T5SINF
Then it is still validating?!
That is the DB validation sequence. There should be little to moderate CPU load depending on the number of files within the local database. Prior to this PR, a DB index was missing, thus causing excessive CPU usage. How long should that process take? It depends on many factors - CPU speed, memory speed, disk I/O ...
After 9h it is still validating, and there is high CPU usage with 2.3 as well. Maybe there is a better way to recognize changes, instead of monitoring all files?! Isn't it possible to observe opened files in a path and only sync or monitor files that were opened in the OneDrive directory?!
@rednag
@rednag I would even go so far as to test your system / setup without using Ubuntu - use a pure Debian or CentOS or Arch Linux.
@norbusan |
Sorry, I've run out of ideas. Something is really strange on that computer. Either file reads are slow, or other processes are hogging IO, or whatever. But without actually sitting in front of the system I don't see a good way to debug this.
edit:
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
Resolves issues:
#21, #347, #394, #404, #432 (in part, as full file system scanning is still occurring, which is being looked at in #433)