fix remaining time estimation for public link uploads #37053
Conversation
Thanks for opening this pull request! The maintainers of this repository would appreciate it if you would create a changelog item based on your changes.
Codecov Report
@@ Coverage Diff @@
## master #37053 +/- ##
=========================================
Coverage 64.75% 64.75%
Complexity 19135 19135
=========================================
Files 1270 1270
Lines 74909 74909
Branches 1329 1329
=========================================
Hits 48511 48511
Misses 26007 26007
Partials 391 391
Continue to review full report at Codecov.
Force-pushed from bd6dd18 to 4c29a21
A couple of questions:
Do we need this? I think this "sugarcoating" is not only inaccurate but also makes the code more complex. The other question is whether we're handling a possible "0" bitrate, mainly so as not to crash something.
Force-pushed from 4c29a21 to b464c26
Yes, I agree on the complexity part. However, I have now tried calculating with average bitrates; I even implemented different weighted averages to reduce the effect of a single bitrate, and the buffer implementation still seems slightly more performant. In addition, I found the commit that introduced this buffer algorithm with git blame. The committer claims that he followed the same logic as the desktop client. IMHO, it can stay.
Yes, we have controls for the zero or negative bitrate case; the code ignores these bitrates.
When I have a local ownCloud and upload a ~2GB ISO file locally, it actually takes about 20 seconds. Before this change the progress bar would say "a few seconds", then after a few seconds it would change to "2 minutes", "3 minutes"... and then near the end go back to estimating "a few seconds". After this change the progress bar says "a few seconds" the whole time (and on some runs might say "1 minute" early in the upload if the upload is going a bit slow). On a 10G file it says "2 minutes" pretty consistently, sometimes estimating 3 or 4 minutes (I think the file system in the background slows down every now and then as it writes out the uploading file). Without the change, it seems to always say "1 minute" or "a few seconds" - i.e. it was too optimistic. For me, this change is "a good thing". It does need to be tried in an environment with "normal" internet speeds and a file size that will take 10 minutes or more to upload, to get a real idea of how "steady" the time estimate is.
I ran some tests by limiting the internet speed of my virtual machine. As far as I can see, all test results are more stable than before.
@karakayasemi check if https://github.com/owncloud/core/pull/36814/files#diff-87d6d7ffd7622e59f65a6e6be8de6a56R53-R59 could be useful here. It's mainly to adjust the thresholds in order to show more accurate info instead of "a few seconds".
@jvillafanez thank you for the suggestion. I guess increasing the precision needs a PM decision. Let's merge it as it is, only fixing its errors.
@karakayasemi Does this ticket refer to this time estimation? If so, it looks like the estimation moves randomly.
@davitol Can you compare the new code's performance with the old one? If the first several calculated bitrates are slower on your instance, that can lead to this behavior. We are also using a smoothing algorithm here that averages the last 20 consecutive packages to calculate the remaining time. The first 20 packages should be processed quickly, but any fluctuating package can affect the average significantly at the beginning of an upload.
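To illustrate why a single fluctuating sample matters so much early in the upload (an illustrative calculation, not code from this PR): with only two samples in the buffer, one outlier contributes half of the average, while in a full 20-sample buffer it contributes only a twentieth.

```javascript
// Illustrative only: how one slow bitrate sample skews a small average
// versus a full 20-sample average. Values are arbitrary units.
const avg = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

const early = [100, 10];                       // 2 samples, one outlier
const full = Array(19).fill(100).concat([10]); // 20 samples, same outlier

console.log(avg(early)); // 55   -> estimate nearly doubles
console.log(avg(full));  // 95.5 -> estimate barely moves
```

This matches the behavior described above: once the buffer fills, a single slow package barely moves the estimate, but early on it can swing it dramatically.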
@karakayasemi Can you clarify if we need another fix? If yes, we would need it on the
I do not think we need a new fix. Since bitrate fluctuation is a very common problem, time estimation is inherently tricky. However, the new implementation performed better in my tests. Also, @phil-davis made a similar comment above: #37053 (comment)
When I tested this in a local git clone and uploaded a few GB just on my
I'm more inclined to provide a percentage based on the amount of data transferred, because it should be more foreseeable and more accurate, but that's a different thing to be considered in the future. I'm not saying to change anything (it's out of scope anyway), but I think it's something to take into account for future developments.
Yes, I already did that when I tested it, and I did not see relevant differences. But since you ran some tests as you said and found it more stable, and it also went fine for @phil-davis, I'm OK with keeping it like it is for this version; let's see if @jvillafanez's suggestion will be worthwhile in the future. Thank you all for the feedback.
Description
Currently, the remaining time estimation relies on the
new Date().getMilliseconds();
difference between two fileupload.fileuploadprogressall
events. When the seconds value changes, this milliseconds difference is meaningless. We are using the jQuery-File-Upload package for file upload, and that package already provides a bitrate for the upload. With this PR the code will use this bitrate. In addition, to provide a smooth experience, the code uses a buffer mechanism. But during the first bufferSize (20) events, since the buffer is not yet full, the time estimation is wrong: the remaining time rapidly increases at the beginning. This problem is resolved by dividing bufferTotal by the filled buffer size. The buffer logic is also explained with a code comment.
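The buffered estimation described above can be sketched roughly as follows. This is a hedged illustration, not the actual PR code: the names (bufferSize, bitrateBuffer, bufferTotal, addBitrateSample, remainingSeconds) are hypothetical, and only the described ideas are assumed — keep the last 20 bitrate samples, ignore zero/negative bitrates, and divide the running total by the filled buffer size rather than the full bufferSize while the buffer is still filling.

```javascript
// Sketch of a ring-buffered bitrate average for remaining-time estimation.
// Names are illustrative; the real PR code lives in ownCloud's file-upload JS.
const bufferSize = 20;    // number of recent bitrate samples to keep
const bitrateBuffer = []; // recent bitrates (bits/s), at most bufferSize entries
let bufferTotal = 0;      // running sum of the buffered bitrates
let bufferIndex = 0;      // next slot to overwrite once the buffer is full

function addBitrateSample(bitrate) {
  if (bitrate <= 0) {
    return; // ignore zero/negative bitrates so we never divide by zero
  }
  if (bitrateBuffer.length < bufferSize) {
    bitrateBuffer.push(bitrate); // still filling the buffer
  } else {
    bufferTotal -= bitrateBuffer[bufferIndex]; // evict the oldest sample
    bitrateBuffer[bufferIndex] = bitrate;
  }
  bufferTotal += bitrate;
  bufferIndex = (bufferIndex + 1) % bufferSize;
}

function remainingSeconds(remainingBytes) {
  if (bitrateBuffer.length === 0) {
    return Infinity; // no samples yet, nothing to estimate from
  }
  // Divide by the *filled* buffer length, not bufferSize, so the average is
  // already correct before 20 samples have arrived (the fix described above).
  const avgBitrate = bufferTotal / bitrateBuffer.length;
  return (remainingBytes * 8) / avgBitrate; // bytes -> bits, then seconds
}
```

For example, after a single sample of 8,000,000 bits/s (1 MB/s), remainingSeconds(1000000) yields 1 second; an early naive division by the full bufferSize of 20 would instead report 20 seconds, which is the rapidly inflated estimate the PR fixes.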
Related Issue
Motivation and Context
Fixing bugs.
How Has This Been Tested?
Test 1:
Test 2:
I repeated tests with IE, Opera, Chrome and Firefox browsers. All of them look okay.
Types of changes
Checklist: