
[Feature Request] Improve reading speed #1862

Closed

jrarseneau opened this issue Nov 25, 2017 · 28 comments

@jrarseneau
Hello,

I think the new cache remote would benefit from a --chunk-read-ahead=X option that would fetch that many chunks ahead of time from the underlying remote.

This would greatly help with Plex and streaming media. Without this read-ahead, the data being fetched from the cloud remote is always "live", and any retry errors or rate limiting that occur have the potential to cause the stream to buffer/stutter.

In my testing of the new cache remote vs. PlexDrive (which has this feature), I can see that rclone pulls data from Google Drive (the remote I use) much more slowly than PlexDrive does. This results in intermittent buffering, especially if you have multiple Plex clients streaming at the same time.

So while the new cache feature is great (no more bans when using with Plex), there could be improvements made to enhance the streaming experience and make it more robust.

As an example, here are my rclone mount settings with the cache remote (I've tried bumping the chunk size from 5M to 10M and even to 50M, but without any real difference):

rclone mount gdceMedia: -vv --allow-other --allow-non-empty --cache-chunk-size=10M --cache-workers=5 --cache-writes --cache-db-path=$HOME/.config/rclone $HOME/mnt/media

@ncw
Member

ncw commented Nov 27, 2017

The --buffer parameter will control read-ahead - try making that bigger? I'm not 100% sure how this interacts with the backend cache but your experiments would be welcome :-)
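
For reference, the flag rclone exposes for this is --buffer-size; a sketch of the mount from above with a larger value (512M is purely illustrative, not a recommendation):

rclone mount gdceMedia: -vv --allow-other --allow-non-empty --buffer-size=512M --cache-chunk-size=10M --cache-workers=5 --cache-writes --cache-db-path=$HOME/.config/rclone $HOME/mnt/media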

@jrarseneau
Author

@ncw

I’ve tried a 1 GB and a 512 MB buffer without much success.

The chunk read-ahead would specifically pull chunks from files actively being read, uniformly. I don’t know if --buffer works the same way?

For example, if I’m streaming 5 videos at the same time, the chunk read-ahead would pull chunks from those 5 video files.

So if I set the setting to 5 and the chunk size to 10M, then for each file it would pre-fetch 50MB of data.

My understanding of how the buffer works is that if I’ve set a 500 MB buffer, and let’s say I start playing 1 video file, the buffer would fill with data from that file, but subsequent simultaneous streams would be starved for buffer data.

I’d love it if you could clarify, because playback with Plex and rclone is significantly less performant than with PlexDrive.

@ncw
Member

ncw commented Nov 27, 2017

My understanding of how the buffer works is that if I’ve set a 500 MB buffer, and let’s say I start playing 1 video file, the buffer would fill with data from that file, but subsequent simultaneous streams would be starved for buffer data.

You get one buffer per open file.

@remusb what are your thoughts? --buffer will work to some extent and it will buffer the underlying things in the cache, but maybe there is a better solution?

@remusb
Collaborator

remusb commented Nov 28, 2017

You can control that already, @jrarseneau, it's just named differently.
https://github.com/ncw/rclone/blob/master/docs/content/cache.md#--cache-workersworkers

Is this what you meant? I suggest increasing that value to match how many look-aheads you had. I would also be curious whether the performance is comparable to what you were experiencing with PlexDrive.
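
As an illustration (values are examples only; per the per-stream behaviour clarified further down in the thread, this works out to roughly 10 × 10M = 100MB prefetched per open file):

--cache-chunk-size=10M --cache-workers=10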

@remusb remusb self-assigned this Nov 28, 2017
@jrarseneau
Author

Hi @remusb

It isn’t clear from the documentation that the cache workers will specifically download chunks ahead of the system asking for them.

For example, how can I ensure rclone/cache has prefetched 100MB of any file that is opened, even if the file system hasn’t specifically asked for it yet (other than the initial read)?

Multiply this by potentially 10-15 simultaneous streams. With PlexDrive, any file that is open for read will have X chunks read ahead. So if I set the read-ahead to 10 chunks and the chunk size to 10MB, then any read will cause PlexDrive to download 100MB for every file and store it in cache.

Is there a way to do exactly that? I think that’s what gives PlexDrive the upper hand when it comes to streaming. If you have 100MB per file in the cache, it’s much more robust against a cloud read error or API rate limiting.

I worry that cache workers set to 10-15 (or whatever) could get starved if there are more than 10-15 read handles open, and that would cause the next stream from Plex to buffer like crazy.

Unless you’re telling me that the cache workers are PER read handle being opened. If it’s not like that, then I don’t know how it won’t get starved eventually, even if you set it to a stupid number like 30-50.

Hope that’s a bit clearer.

@remusb
Collaborator

remusb commented Nov 28, 2017

I really should have named that differently :)

Yes, it behaves exactly like Plexdrive in that manner. Any open file stream will have X (let's say 15) handles open to the cloud provider to prefetch the data. The workers are independent for every stream so if you open the same file 2 times you will actually have 30 workers doing the prefetch.

@jrarseneau
Author

Ah!

Ok I will try it with these settings and report back:
-vv --allow-other --allow-non-empty --cache-chunk-size=10M --cache-workers=10 --buffer-size=200M --cache-writes --cache-db-path=$HOME/.config/rclone

It does appear CPU usage with rclone is higher than with PlexDrive. Give me a few days.

@jrarseneau
Author

jrarseneau commented Nov 28, 2017

Well, I didn’t need a few days. I had 4 streams going tonight with the above settings and, looking at PlexPy, the transcode speed has trouble maintaining 1.0x. With PlexDrive, it’s hitting 7-8x.

I’m not sure if this is specifically cache-related or not; from the logs I was seeing a lot of times where rclone would close the file handle and re-open it during an active stream, which should never happen.

I have to move back to PlexDrive+Rclone (PlexDrive mounts via Google and a local crypt remote to decrypt). This works very well. I would love to simplify the setup.

According to the mount settings, I should be storing 100M of chunks from each open file, but it sure doesn’t seem that way. For a 720p video that’s about 800MB in size, I should have 15% of the file stored in cache within a few seconds, but that is not happening (or at least not that I can see from what PlexPy is reporting).

@remusb
Collaborator

remusb commented Nov 29, 2017

Just to confirm: you were using gdrive for that test, no?

I'm curious about "rclone would close the file handle and re-open the file handle during an active stream". This can bring the speed down, as it would essentially clear the RAM caching every time and constantly start from 0. How often did that happen? A ballpark number is fine, no need to redo the test.

@jrarseneau
Author

Yes, using gdrive.

I can’t recreate the issue; it may only happen with multiple streams overloading something? I tried recreating it this morning and couldn’t. It appears it was doing it every minute or so, but I can’t be certain.

Would the setting --cache-chunk-no-memory help at all in this situation? The machine is an i7-7700 with 32GB of RAM.

@jrarseneau
Author

jrarseneau commented Nov 29, 2017

Seeing a lot of this. The retry is set to the default of 3; I’m not sure why it’s running out of data (it has downloaded 1G, but the stream isn’t even close to that point in the playback):

2017/11/29 12:28:11 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 0: chunk retry storage: 0
2017/11/29 12:28:12 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 30M
2017/11/29 12:28:12 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 10M
2017/11/29 12:28:12 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 0: chunk retry storage: 1
2017/11/29 12:28:12 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 0
2017/11/29 12:28:12 DEBUG : worker-3 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 60M
2017/11/29 12:28:12 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 20M
2017/11/29 12:28:12 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 80M
2017/11/29 12:28:12 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 90M
2017/11/29 12:28:13 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 70M
2017/11/29 12:28:13 DEBUG : worker-0 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 50M
2017/11/29 12:28:13 DEBUG : worker-7 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 40M
2017/11/29 12:28:13 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 7035944960: chunk retry storage: 0
2017/11/29 12:28:13 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 6.553G
2017/11/29 12:28:14 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 6.543G
2017/11/29 12:28:14 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 104857600: chunk retry storage: 0
2017/11/29 12:28:15 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 100M
2017/11/29 12:28:15 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 110M
2017/11/29 12:28:15 DEBUG : worker-7 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 170M
2017/11/29 12:28:15 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 120M
2017/11/29 12:28:15 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 136314880: chunk retry storage: 0
2017/11/29 12:28:15 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 150M
2017/11/29 12:28:15 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 160M
2017/11/29 12:28:15 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 130M
2017/11/29 12:28:15 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 140M
2017/11/29 12:28:16 DEBUG : worker-3 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 190M
2017/11/29 12:28:16 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 200M
2017/11/29 12:28:16 DEBUG : worker-7 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 220M
2017/11/29 12:28:16 DEBUG : worker-0 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 180M
2017/11/29 12:28:16 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 210M
2017/11/29 12:28:16 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 241172480: chunk retry storage: 0
2017/11/29 12:28:17 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 230M
2017/11/29 12:28:17 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 250M
2017/11/29 12:28:17 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 240M
2017/11/29 12:28:17 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 270M
2017/11/29 12:28:17 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 320M
2017/11/29 12:28:17 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 272629760: chunk retry storage: 0
2017/11/29 12:28:18 DEBUG : worker-0 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 310M
2017/11/29 12:28:18 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 290M
2017/11/29 12:28:18 DEBUG : worker-7 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 300M
2017/11/29 12:28:18 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 350M
2017/11/29 12:28:18 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 330M
2017/11/29 12:28:18 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 340M
2017/11/29 12:28:18 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 272629760: chunk retry storage: 1
2017/11/29 12:28:19 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 272629760: chunk retry storage: 2
2017/11/29 12:28:23 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 260M
2017/11/29 12:28:25 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 293601280: chunk retry storage: 0
2017/11/29 12:28:25 DEBUG : worker-3 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 280M
2017/11/29 12:28:25 DEBUG : worker-0 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 280M
2017/11/29 12:28:25 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 360M
2017/11/29 12:28:25 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 370M
2017/11/29 12:28:26 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 398458880: chunk retry storage: 0
2017/11/29 12:28:26 DEBUG : worker-7 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 380M
2017/11/29 12:28:27 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 390M
2017/11/29 12:28:27 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 420M
2017/11/29 12:28:27 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 410M
2017/11/29 12:28:27 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 470M
2017/11/29 12:28:27 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 419430400: chunk retry storage: 0
2017/11/29 12:28:27 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 400M
2017/11/29 12:28:27 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 430M
2017/11/29 12:28:27 DEBUG : worker-0 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 450M
2017/11/29 12:28:27 DEBUG : worker-7 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 480M
2017/11/29 12:28:27 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 490M
2017/11/29 12:28:28 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 460M
2017/11/29 12:28:28 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 461373440: chunk retry storage: 0
2017/11/29 12:28:28 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 500M
2017/11/29 12:28:29 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 510M
2017/11/29 12:28:29 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 530M
2017/11/29 12:28:29 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 520M
2017/11/29 12:28:29 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 461373440: chunk retry storage: 1
2017/11/29 12:28:30 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 461373440: chunk retry storage: 2
2017/11/29 12:28:35 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 461373440: chunk retry storage: 0
2017/11/29 12:28:36 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 440M
2017/11/29 12:28:37 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 566231040: chunk retry storage: 0
2017/11/29 12:28:37 DEBUG : worker-0 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 540M
2017/11/29 12:28:37 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 550M
2017/11/29 12:28:37 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 560M
2017/11/29 12:28:37 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 580M
2017/11/29 12:28:38 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 590M
2017/11/29 12:28:38 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 597688320: chunk retry storage: 0
2017/11/29 12:28:38 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 600M
2017/11/29 12:28:38 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 570M
2017/11/29 12:28:38 DEBUG : worker-3 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 610M
2017/11/29 12:28:38 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 630M
2017/11/29 12:28:38 DEBUG : worker-7 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 620M
2017/11/29 12:28:38 DEBUG : worker-0 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 640M
2017/11/29 12:28:38 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 650M
2017/11/29 12:28:38 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 660M
2017/11/29 12:28:39 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 702545920: chunk retry storage: 0
2017/11/29 12:28:39 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 670M
2017/11/29 12:28:39 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 680M
2017/11/29 12:28:40 DEBUG : worker-3 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 440M
2017/11/29 12:28:40 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 690M
2017/11/29 12:28:40 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 750M
2017/11/29 12:28:40 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 734003200: chunk retry storage: 0
2017/11/29 12:28:40 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 700M
2017/11/29 12:28:40 DEBUG : worker-3 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 710M
2017/11/29 12:28:40 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 760M
2017/11/29 12:28:40 DEBUG : worker-0 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 740M
2017/11/29 12:28:40 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 770M
2017/11/29 12:28:40 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 720M
2017/11/29 12:28:40 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 780M
2017/11/29 12:28:40 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 790M
2017/11/29 12:28:41 DEBUG : worker-7 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 730M
2017/11/29 12:28:41 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 800M
2017/11/29 12:28:42 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 810M
2017/11/29 12:28:42 DEBUG : worker-3 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 820M
2017/11/29 12:28:42 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 830M
2017/11/29 12:28:42 DEBUG : worker-0 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 840M
2017/11/29 12:28:42 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 870M
2017/11/29 12:28:42 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 880M
2017/11/29 12:28:42 DEBUG : worker-7 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 890M
2017/11/29 12:28:42 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 850M
2017/11/29 12:28:43 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 860M
2017/11/29 12:28:43 DEBUG : worker-0 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 900M
2017/11/29 12:28:43 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 910M
2017/11/29 12:28:43 DEBUG : worker-3 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 920M
2017/11/29 12:28:44 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 930M
2017/11/29 12:28:44 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 940M
2017/11/29 12:28:44 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 950M
2017/11/29 12:28:45 DEBUG : worker-7 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 970M
2017/11/29 12:28:45 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 960M
2017/11/29 12:28:45 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 980M
2017/11/29 12:28:45 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 990M
2017/11/29 12:28:46 DEBUG : worker-3 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1020M
2017/11/29 12:28:46 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.006G
2017/11/29 12:28:46 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1010M
2017/11/29 12:28:46 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.025G
2017/11/29 12:28:47 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 1048576000: chunk retry storage: 0
2017/11/29 12:28:47 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.016G
2017/11/29 12:28:47 DEBUG : worker-7 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.035G
2017/11/29 12:28:48 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 1048576000: chunk retry storage: 1
2017/11/29 12:28:49 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 1048576000: chunk retry storage: 2
2017/11/29 12:28:52 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.045G
2017/11/29 12:28:52 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.064G
2017/11/29 12:28:52 DEBUG : worker-0 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1000M
2017/11/29 12:28:53 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.055G
2017/11/29 12:28:55 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.084G
2017/11/29 12:28:55 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 1153433600: chunk retry storage: 0
2017/11/29 12:28:55 DEBUG : worker-7 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.123G
2017/11/29 12:28:55 DEBUG : worker-1 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.152G
2017/11/29 12:28:55 DEBUG : worker-6 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.104G
2017/11/29 12:28:55 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.074G
2017/11/29 12:28:55 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.143G
2017/11/29 12:28:55 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.094G
2017/11/29 12:28:55 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.113G
2017/11/29 12:28:55 DEBUG : worker-3 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.162G
2017/11/29 12:28:56 DEBUG : worker-5 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.172G
2017/11/29 12:28:57 DEBUG : worker-3 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.182G
2017/11/29 12:28:57 DEBUG : worker-2 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.191G
2017/11/29 12:29:06 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 1216348160: chunk retry storage: 0
2017/11/29 12:29:06 DEBUG : worker-9 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.201G
2017/11/29 12:29:07 DEBUG : worker-8 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.211G
2017/11/29 12:29:07 DEBUG : worker-4 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.221G
2017/11/29 12:29:07 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 1216348160: chunk retry storage: 1
2017/11/29 12:29:08 DEBUG : o9gkbkhbpaq0ceh1cm21f6ln38/fmume0ps7drvha6q7chnjjj9j6qjjeu57aqvtv03c425mr2ch8v0/vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q: 1216348160: chunk retry storage: 2
2017/11/29 12:29:12 DEBUG : worker-0 <vj4omu2mec1tuqk4tp0r55au7fm41gp10hhaquhhku38f9nqc3vaamjggkm3h72hqb7vspnk47k3q>: downloaded chunk 1.133G

@remusb
Collaborator

remusb commented Nov 29, 2017

--cache-chunk-no-memory will actually slow things down for you. With that much RAM I would not add that flag; I'd use as many workers as I can until I'm near the CPU throttle and GDrive limits.

chunk retry storage: 0/1/2 - with --buffer-size=200M you're actually running double buffering there. One is at the low level of rclone, which will fetch 200M of data from the cache, which in turn fetches data in 10M chunks from the source.
Your stream could be way behind it and won't suffer from these retries because they're exchanges between the 2 caches.

If you're still interested in testing stuff: try with --buffer-size=0. I need to sync up with @ncw on how to synchronize these caching patterns better. I think they work well towards better read quality, but they might cause some penalties in performance and hardware stress.
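
Concretely, that means the same mount as before but with the RAM buffer disabled, for example (a sketch; the other flags are unchanged from the settings quoted earlier):

rclone mount gdceMedia: -vv --allow-other --allow-non-empty --cache-chunk-size=10M --cache-workers=10 --buffer-size=0 --cache-writes --cache-db-path=$HOME/.config/rclone $HOME/mnt/media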

@jrarseneau
Author

So I had time to do some testing and I’m afraid the results aren’t good with the --buffer-size=0 setting.

Here is my test, and it’s quite simple: I start transcode playback of 2 movies simultaneously from Plex and look at PlexPy to see what the transcode speed is. I believe the transcode speed is a good indicator of the underlying FUSE system’s (rclone/PlexDrive) ability to pull data from Google efficiently and quickly.

With rclone cache:
Transcode speed of both was between 0.5x and 2x. This isn’t good; anything under 1x will surely result in buffering.

With PlexDrive:
Transcode speed of both was over 10x

What does this mean? Well, assuming a movie is 2 hours, a transcode speed of 10x means it could transcode / pull data from the remote at 10x the playback speed, so 2 hours would mean it could pull the full movie and transcode it in about 12 minutes (this is PlexDrive).

With rclone, at 2x, it would take 60 minutes to pull the data and transcode it.

I’m not sure if this is cache-related or down to how rclone pulls data from a remote. I don’t have this problem in my PlexDrive setup, and rclone is involved there too, but only to decrypt a local folder that PlexDrive presents; PlexDrive is the one that talks to Google.

I’m open to doing more testing; I’ve recently set up a sandbox Plex server to test with so I don’t affect production, and I can reproduce these results pretty consistently. PlexDrive is orders of magnitude faster at pulling data from Google and presenting it to the system than rclone is.

@remusb
Collaborator

remusb commented Nov 30, 2017

Thanks @jrarseneau, I'll surely try testing with PlexPy. You have to understand this is still under experimentation and its first release to the general public. Your data does help though, as I didn't have a proper way to benchmark this scenario until now 👍

I'm not sure what to do with this issue. The functionality you first asked about is there; the performance is below what you would expect, for sure, but I'm currently focusing on getting cache's quirks fixed rather than improving performance.

I'm happy to leave it open but we'll need to rename this and relabel it as performance related.

@jrarseneau
Author

@remusb

I’m adding some new information..

I notice that the cache is filling up very quickly. For example, I started an episode where the file is 1.5GB, and within about 5 seconds the cache had 280MB of data (so 18%), but the transcode speed in PlexPy listed it at around 2x. The same file played via PlexDrive shows a >8x transcode speed.

It’s almost as though the raw file data is downloaded from the remote (Google in this case) very rapidly by the cache workers, but isn’t handed off fast enough to the file system/Plex (if that makes any sense); it's almost as though Plex’s ability to pull data from the rclone mount cache is being throttled.

Are you able to think of any reason why this would be?

I’m also OK with relabelling this. I think you’re going to get a lot of people who expect the cache to be a panacea for using rclone mount with Google Drive (many of us fled to PlexDrive due to consistent bans when using rclone).

Thanks!

@remusb remusb changed the title [Feature Request] Cache: add chunk read ahead [Feature Request] Improve reading speed Dec 5, 2017
@remusb
Collaborator

remusb commented Dec 5, 2017

Hi @jrarseneau

I made a few tests myself while working on some improvements. It's news to me that transcoding requires the full file before playback, so I'll personally just stay away from that for my binging times.
Still, it's a good indication of what you saw. There's no reason why rclone should pull data more slowly than any other consumer of the bandwidth. When I started writing cache I did some tests with dd on a local FS and the results were pretty good, but too many things have changed since then, so I'll rerun the same type of tests to be sure nothing is throttling data in rclone.

On another note: I stripped a lot of stuff that was likely slowing things down. I have PR #1884 open that first needs to be merged, but straight after that I'll push some changes to cache which will definitely help with the speed (I'll ping back here after a beta is up). Not sure if it's the same level of speed you're hoping to see, but one can hope.

@jrarseneau
Author

Thanks @remusb

I don’t believe the Plex transcoder needs the entire file before starting playback. I know this because of the extensive testing I’ve done.

Playback with PlexDrive, for instance, starts within 3-5 seconds, no matter the size of the file. This wouldn’t be possible if it had to download the entire file first. Rclone is very erratic (even before cache came along): start times can be sub-5 seconds or can be 20-30 seconds. I’ve never known what causes this, but my best guess is that rclone tends to hit Google’s rate limiting more than PlexDrive, so if there’s a rate-limiting warning at the start of playback it could slow things down.

One observation I made last night that I forgot to mention was that on initial read, the cache fills up to about 200MB, but stops there. I’m wondering why the whole file isn’t downloaded? It’s possible that the transcoder is being starved and has to wait until rclone gives it more data.

Would there be a way for testing purposes that I could get the cache subsystem to download the entire file on read?

@remusb
Collaborator

remusb commented Dec 5, 2017

@jrarseneau it stops at 200 MB because --buffer-size=200M

The thing is, that buffer is too hungry and you're asking for too much. Reading through the cache is best done on demand and is more cost-efficient that way. When I say cost I don't mean just bandwidth cost; a bunch of things turn out to cost too much (requests per second, hardware resources, bandwidth, etc.).

That's why I recommended setting it to 0 and letting just the cache deal with the traffic. It will not go much further ahead than where you currently are with the reading; the read-ahead formula is actually workers * chunk size.

I would wait for the next beta for the next rounds of tests. You will definitely see some improvements in this area.

@jrarseneau
Author

@remusb

I forgot to add that my buffer-size is 0 for the past tests.

Let me know when the beta is available.

@remusb
Collaborator

remusb commented Dec 5, 2017

Hm, ok. It doesn't sound like it gets applied though. I see the help page says --buffer-size 200M instead of --buffer-size=200M. Not sure if there's any difference between them though.
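
For what it's worth, rclone's flags follow standard Go pflag conventions, so as far as I know the space and equals forms should parse identically, e.g.:

rclone mount gdceMedia: --buffer-size 0 $HOME/mnt/media
rclone mount gdceMedia: --buffer-size=0 $HOME/mnt/media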

@remusb
Collaborator

remusb commented Dec 9, 2017

https://beta.rclone.org/v1.38-217-gb48b5373/

Here's the latest beta I mentioned. Since you're using Plex, it would be great if you used the integration and configured rclone to talk to Plex to manage load during scans and playback. It's in the doc: https://github.com/ncw/rclone/blob/master/docs/content/cache.md
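
For anyone setting it up, the Plex integration is configured on the cache remote itself in rclone.conf; a rough sketch per the linked doc, with the remote name, underlying remote, and credentials as placeholders (credentials are normally entered via rclone config, which stores the password obscured):

[gdceMedia]
type = cache
remote = gdrive:media
plex_url = http://127.0.0.1:32400
plex_username = yourPlexUser
plex_password = yourPlexPassword
chunk_size = 10M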

@jrarseneau
Author

Anecdotal evidence and it needs more testing, but this build is much better.

Two streams (movies, which are longer) and the transcode speed was over 4x, climbing as high as 7-8x until Plex had enough data to throttle the transcoder.

I will do more testing tomorrow with 6+ streams to see how it holds up.

@remusb
Collaborator

remusb commented Dec 10, 2017

If you are using the Plex integration, the first few seconds will run on a single worker until rclone successfully confirms with Plex that it wants a playback, so it's natural to see a slower ramp-up.

But I feel this is an acceptable trade-off considering the environment we're running in. After the two are in agreement that a playback is happening, it will start up all the workers you configured and Plex should start to get data really fast.

But the real improvement comes from a bug I found where the reading of the workers was constantly interrupted and the cache advantages were getting lost.

@jrarseneau
Author

Perfect, I think it's an acceptable trade-off as well, but I'll have to confirm when I have 7+ streams going (that's my typical usage on a busy night with family/friends) and PlexDrive handles this fine.

Do you know approximately how much memory each cache worker consumes?

@remusb
Collaborator

remusb commented Dec 10, 2017

It's something like workers * chunk size per handle. So a 5MB chunk * 4 workers will result in a total of 20MB of RAM per stream. You should also add a bit of overhead, as it's not just the raw bytes of the file in there, but as a general idea that's the formula to go on.
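
As a rough back-of-the-envelope check against the settings used earlier in this thread (assuming the formula above):

10 workers * 10M chunks ≈ 100MB of RAM per open stream
7 simultaneous streams ≈ 700MB total, plus some overhead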

@remusb
Collaborator

remusb commented Dec 18, 2017

@jrarseneau
Can we close this for now? I'm hoping we have an acceptable performance for 1.39.

@jrarseneau
Author

@remusb
Let me stress test this tonight now that the missing files bug is fixed as I wasn't using rclone for the past 3-4 days while waiting for that.

I think if we can do 4+ transcode streams concurrently, then performance would be sufficient for a release (IMO).

@remusb remusb added this to the v1.39 milestone Dec 19, 2017
@jrarseneau
Author

I had 7 transcodes going and the transcode speed dropped to about 2.5x which is still sufficient to maintain smooth playback.

I think in terms of performance this is good enough to release.

Closing...
