Optimize music sync for large libraries #387
Conversation
@n-buck if you can, I'd appreciate a test with your giant library to see if this works properly. Be warned that it's going to take a bit of time. As a comparison, my 9,300-song library takes just over 3 minutes, and it's largely dependent on local device speed.
I've tested this pull request with Kodi 18 (on CoreELEC, latest stable release) and it's working great. Thanks a lot!
Kudos, SonarCloud Quality Gate passed! 0 bugs, no coverage information.
Unless there's a good reason to keep the batch object (is anything other than `Items` being used?), I'd prefer if the `for item in batch['Items']` loop was moved a couple of functions up (into `_get_items`). That's something for another pull request, however.
@mcarlton00 I just tested it and it works flawlessly with my library.
Instead of pulling all resources from the server in a single API call, utilize the "Paging - Max Items" setting and pull them in batches. The default value is 15 items per batch. Loads them with a generator so we can start processing them locally while still loading results from the server.
Changes the sync order a bit because of the way the loading happens. Instead of processing all of artist A, then moving on to all of artist B, it now syncs all artists, followed by all albums, with all songs at the end.
Slightly slower sync speeds than the previous iteration, but should be much safer when large libraries are concerned, and still much faster than 0.5.8. Fixes #386 and should prevent another situation similar to #380 from happening.
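The generator-based paging described above might look roughly like this. Note this is a sketch, not the PR's actual code: `fetch_page`, its `(start_index, limit)` signature, and the `Items`/`TotalRecordCount` response keys are assumptions modeled on typical Jellyfin-style paged responses.

```python
def get_items_paged(fetch_page, limit=15):
    """Yield library items one at a time, fetching them from the server
    in batches of `limit` (mirroring the "Paging - Max Items" setting).

    `fetch_page(start_index, limit)` is a hypothetical callable standing
    in for the server API request; it is assumed to return a dict with
    an 'Items' list and a 'TotalRecordCount' integer.
    """
    start_index = 0
    while True:
        batch = fetch_page(start_index, limit)
        items = batch.get('Items', [])
        if not items:
            break
        # Yield each item so local processing can begin while
        # later pages are still being fetched from the server.
        for item in items:
            yield item
        start_index += len(items)
        if start_index >= batch.get('TotalRecordCount', start_index):
            break
```

Because the caller consumes items lazily, memory use stays bounded by the batch size rather than the full library, which is the safety property this PR is after for large libraries.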