Unexpected out of order reads streaming with block size 32768 #26
Trying to dig into why gunzip was slow, I enabled debug logging while reading a file.

When reading with a block size of 131072 (the same as cat) I see 311 responses logged with status 206 Partial Content. When reading with a block size of 32768 (the same as gunzip --stdout) I see 621 responses logged with status 206 Partial Content.

The difference in requests seems to be accounted for by 310 log lines reporting the 'out of order' warning.
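(A quick sanity check on those counts, using only the numbers reported above: the gap is exactly 310, one extra request for every original request except the last. A minimal sketch:)

```go
// Back-of-the-envelope check on the request counts reported above.
package main

import "fmt"

func main() {
	aligned := 311   // 206 responses with bs=131072 (cat)
	unaligned := 621 // 206 responses with bs=32768 (gunzip --stdout)

	fmt.Println(unaligned - aligned)      // 310: the extra requests
	fmt.Println(unaligned == 2*aligned-1) // true: one extra per request except the last
}
```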
could you upload the log somewhere? I ran some smaller tests and don't see any out of order logs.
The warning indicates that read is requesting offset 5242880 when the expected offset is 5308416 (last byte read was 5308415), which is very strange given that you are just using dd.
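(For readers following along, this is roughly the kind of check that produces such a warning. A hypothetical sketch, not goofys's actual code; the type and field names are invented.)

```go
// Hypothetical sketch of an out-of-order read check; an illustration of
// the warning being discussed, not goofys's real implementation.
package main

import "log"

type reader struct {
	nextOffset int64 // one past the last byte served to the kernel
}

func (r *reader) read(offset, size int64) {
	if offset != r.nextOffset {
		log.Printf("out of order read: requested %d, expected %d", offset, r.nextOffset)
	}
	r.nextOffset = offset + size
}

func main() {
	r := &reader{}
	r.read(0, 5308416)     // sequential reads up to byte 5308415
	r.read(5242880, 65536) // logs: requested 5242880, expected 5308416
}
```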
Here's the log while running the test. This is on Ubuntu 15.10 with goofyfs at commit c3de75e.
I don't have time to look into it in detail today, but my initial hunch is that they are innocuous logging statements, because they seem to happen at 5MB boundaries (which is the hardcoded range read size).
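(The logged offsets do line up with that hunch. A quick check, assuming the 5MB range read size mentioned above:)

```go
// Quick check that the logged offsets sit on a 5MB range boundary.
package main

import "fmt"

func main() {
	const rangeSize = 5 * 1024 * 1024 // 5242880

	fmt.Println(5242880 % rangeSize) // 0: the requested offset is exactly a 5MB boundary
	fmt.Println(5308416 - 5242880)   // 65536: the expected offset is one 64KB chunk past it
}
```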
I count almost double the number of HTTP requests being made when using 32KB reads, so I don't think it's innocuous. I've redone the tests at commit a8fe0ae, reading only the first 33MB so I can paste full logs into the gist: https://gist.github.com/lrowe/275da6f2fc04bfa3bf2f

Searching the log shows what's going on. FUSE will read ahead by increasing read sizes up to a maximum of 128KB. In the dd bs=128K case there is no read-ahead, as the reads are already aligned with the maximum read size. In the dd bs=32K case the first read is rounded up to 64KB and subsequent reads to 128KB, so the reads end up offset around the 5MB boundary. See https://gist.github.com/lrowe/275da6f2fc04bfa3bf2f#file-_relevant-log

It looks like there is a race condition when a read straddles the 5MB boundary.
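(A small simulation of the read pattern described above. The 5MB range size and 128KB maximum readahead are taken from this thread; everything else is an illustration, not goofys code.)

```go
// Simulate the FUSE read offsets for the dd bs=32K case described above:
// the first read is rounded up to 64KB and all later reads to 128KB, so
// every later read starts 64KB off alignment and one read per 5MB range
// ends up straddling the range boundary.
package main

import "fmt"

func main() {
	const rangeSize = 5 * 1024 * 1024 // hardcoded backend range read size
	const maxRead = 128 * 1024        // maximum FUSE readahead size

	// bs=128K: 5MB is an exact multiple of 128KB, so aligned reads
	// never cross a range boundary.
	fmt.Println(rangeSize%maxRead == 0) // true

	// bs=32K: reads start at 64KB and advance in 128KB steps.
	for off := int64(64 * 1024); off < 2*rangeSize; off += maxRead {
		end := off + maxRead
		if off/rangeSize != (end-1)/rangeSize {
			boundary := (end - 1) / rangeSize * rangeSize
			fmt.Printf("read [%d, %d) straddles the boundary at %d\n", off, end, boundary)
		}
	}
	// Output:
	// read [5177344, 5308416) straddles the boundary at 5242880
	// read [10420224, 10551296) straddles the boundary at 10485760
}
```

Note how the first straddling read ends at 5308416 while the boundary sits at 5242880: exactly the "expected" and "requested" offsets in the warning, consistent with the two backend range reads for the two halves of one straddling read racing each other.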
I think I fixed the problem, but since I had trouble reproducing it, could you double-check for me? Thanks a lot for your investigation!
Thanks! Updating to current master, I now see the expected number of requests, and the 'out of order' warnings are gone.