
The idea of the starting time of 'dl_limit_start' #2406

Closed
liupeidong0620 opened this issue Mar 20, 2018 · 11 comments

Comments

@liupeidong0620

Possible positions of 'dl_limit_start' are as follows:

|_________________________|________________________|__________________|__________________________|____________|
(1) DNS resolution time  (2)  TCP connection time (3)   TLS time     (4) Time to First Byte(TTFB)             ok

The reason:

In curl 7.60.0-DEV, 'dl_limit_start' is only updated after three seconds have elapsed, and its initial value is set at point (1).

Possible problems:

Phases (1), (2) and (3) may take a lot of time, so the actual download speed right after the connection is established can be very high (within the first three seconds the transfer may effectively be unrestricted).


Idea:

  • I think it is more reasonable to start measuring at (4), so that the connection-setup phases have no influence on the real speed limit. (A sketch of the effect follows below.)
  • Even better, let users choose the starting point themselves (any of the four positions above).

Just my personal thoughts.
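
To illustrate the effect, here is a minimal C sketch (not curl's actual code; the names and the 1.2 s setup time are assumptions) of a limiter that computes its byte budget from a chosen start timestamp. If the clock starts at (1), the DNS/TCP/TLS/TTFB time counts as time in which bytes could have been sent, so a large budget has already accrued before the first body byte arrives; starting the clock at (4) removes that slack.

#include <stdio.h>

/* Hypothetical timestamps for one transfer, in milliseconds:
 * (1) start of DNS resolution, (4) arrival of the first body byte. */
#define T_DNS_START      0.0
#define T_FIRST_BYTE  1200.0   /* assume 1.2 s spent on DNS + TCP + TLS + TTFB */

/* Bytes the limiter allows "for free" at time `now`, given a target
 * rate and the chosen reference point. */
static double allowed_bytes(double now_ms, double start_ms, double rate_Bps)
{
    return (now_ms - start_ms) / 1000.0 * rate_Bps;
}

int main(void)
{
    const double rate = 15.0 * 1024 * 1024;   /* --limit-rate 15M */
    const double now  = T_FIRST_BYTE;         /* the moment data starts flowing */

    printf("budget if the clock starts at (1): %.1f MB\n",
           allowed_bytes(now, T_DNS_START, rate) / (1024 * 1024));
    printf("budget if the clock starts at (4): %.1f MB\n",
           allowed_bytes(now, T_FIRST_BYTE, rate) / (1024 * 1024));
    return 0;
}

With these assumed numbers, 1.2 s of setup at a 15 MB/s limit means an 18 MB budget is already available when the first byte arrives, which is more than the 16.2 MB file in the tests below, so the whole transfer can run at line speed.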

@bagder
Member

bagder commented Mar 20, 2018

For you, now, sure. For someone else, another point will make more sense. Rate limiting cannot be an exact science, so we just need to find the best "middle ground" that we think works suitably for the majority.

I don't believe in letting users set the starting point, as virtually nobody would know where to set it.

@bagder
Member

bagder commented Mar 21, 2018

So how about we simply change the minimum period to 1 millisecond?

#define MIN_RATE_LIMIT_PERIOD 1
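
For reference, a minimal sketch of what this period controls, assuming the limiter works roughly like this (illustrative names and structure, not curl's progress code): the limiter keeps a reference point for its averaging window and only moves it forward once the window is older than the minimum period; the pause it requests is whatever brings the window average back down to the target rate. A 1 ms minimum therefore resets the window on nearly every check, so the "average" it enforces is essentially the instantaneous speed.

#include <stdio.h>

/* Reference point for the averaging window (hypothetical names). */
struct ratelimit {
    double start_ms;     /* when the current averaging window began */
    double start_bytes;  /* byte count at the start of the window   */
};

/* Advance the window reference point only once it is older than
 * `period_ms` (the role MIN_RATE_LIMIT_PERIOD plays). */
static void ratelimit_update(struct ratelimit *rl, double period_ms,
                             double now_ms, double bytes)
{
    if(now_ms - rl->start_ms >= period_ms) {
        rl->start_ms = now_ms;
        rl->start_bytes = bytes;
    }
}

/* Pause needed so the window average stays at or below `rate` bytes/s. */
static double limit_wait_ms(const struct ratelimit *rl, double now_ms,
                            double bytes, double rate)
{
    double elapsed_ms = now_ms - rl->start_ms;
    double min_ms = (bytes - rl->start_bytes) / rate * 1000.0;
    return min_ms > elapsed_ms ? min_ms - elapsed_ms : 0.0;
}

int main(void)
{
    const double rate = 10.0 * 1024 * 1024;   /* 10 MB/s target        */
    const double burst = 2.0 * 1024 * 1024;   /* 2 MB arrives in 100 ms */
    double periods[] = {1.0, 3000.0};
    int i;

    for(i = 0; i < 2; i++) {
        struct ratelimit rl = {0.0, 0.0};
        ratelimit_update(&rl, periods[i], 100.0, burst);
        printf("period %4.0f ms: pause %.0f ms after the burst\n",
               periods[i], limit_wait_ms(&rl, 100.0, burst, rate));
    }
    return 0;
}

With a 3000 ms period the 2 MB burst is remembered and paid back with a 100 ms pause; with a 1 ms period the window has already been reset, so bursts and sleep overshoots are never averaged out over the transfer, which may explain why the measured average falls below the target in the tests below.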

@liupeidong0620
Author

Very bad!

Test results:

#define MIN_RATE_LIMIT_PERIOD 1
[root@california src]# time ./curl http://linux.mirrors.es.net/centos/7.4.1708/os/x86_64/Packages/dyninst-testsuite-9.3.1-1.el7.x86_64.rpm -O  --limit-rate 15M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 16.2M  100 16.2M    0     0  8256k      0  0:00:02  0:00:02 --:--:-- 8256k

real	0m2.060s
user	0m0.017s
sys	0m0.036s
#define MIN_RATE_LIMIT_PERIOD 3000
[root@california src]# time ./curl http://linux.mirrors.es.net/centos/7.4.1708/os/x86_64/Packages/dyninst-testsuite-9.3.1-1.el7.x86_64.rpm -O  --limit-rate 15M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 16.2M  100 16.2M    0     0  14.9M      0  0:00:01  0:00:01 --:--:-- 15.0M

real	0m1.162s
user	0m0.039s
sys	0m0.039s

@bagder
Member

bagder commented Mar 22, 2018

I would say that transferring 16MB while limiting the speed to 15M is a very bad test, though: you'll get very few samples, so the result depends heavily on the early speed. I presume that in your case the initial speed was very high, so the initial limiting delayed the transfer a bit too much and there was no time left to correct for it.
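
To put rough numbers on that (assuming the transfer runs near the 15 MB/s target for most of its lifetime):

  transfer time     ≈ 16.2 MB / 15 MB/s ≈ 1.1 s
  averaging period  = 3000 ms = 3.0 s

so the whole transfer fits inside a single averaging window, and whatever happens in the first fraction of a second dominates the measured average. A test file worth at least tens of seconds at the target rate gives the limiter a chance to correct itself.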

@liupeidong0620
Author

Actual download speed:

[root@california src]# time ./curl -o /dev/null http://linux.mirrors.es.net/centos/7.4.1708/os/x86_64/images/boot.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  422M  100  422M    0     0  70.2M      0  0:00:06  0:00:06 --:--:-- 71.7M

real	0m6.018s
user	0m0.555s
sys	0m1.650s

Test

#define MIN_RATE_LIMIT_PERIOD 1
[root@california src]# time ./curl -o /dev/null http://linux.mirrors.es.net/centos/7.4.1708/os/x86_64/images/boot.iso --limit-rate 10M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  422M  100  422M    0     0  8108k      0  0:00:53  0:00:53 --:--:-- 8102k

real	0m53.305s
user	0m0.206s
sys	0m0.207s
[root@california src]# time ./curl -o /dev/null http://linux.mirrors.es.net/centos/7.4.1708/os/x86_64/images/boot.iso --limit-rate 20M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  422M  100  422M    0     0  23.1M      0  0:00:18  0:00:18 --:--:-- 23.1M

real	0m18.255s
user	0m0.200s
sys	0m0.143s

#define MIN_RATE_LIMIT_PERIOD 3000
[root@california src]# time ./curl -o /dev/null http://linux.mirrors.es.net/centos/7.4.1708/os/x86_64/images/boot.iso --limit-rate 10M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  422M  100  422M    0     0   9.9M      0  0:00:42  0:00:42 --:--:--  9.9M

real	0m42.304s
user	0m0.175s
sys	0m0.259s
[root@california src]# time ./curl -o /dev/null http://linux.mirrors.es.net/centos/7.4.1708/os/x86_64/images/boot.iso --limit-rate 20M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  422M  100  422M    0     0  19.9M      0  0:00:21  0:00:21 --:--:-- 19.9M

real	0m21.170s
user	0m0.108s
sys	0m0.273s

@liupeidong0620
Author

What do you think of this test?

@bagder
Member

bagder commented Mar 22, 2018

That clearly shows how 1ms is worse than 3000ms, yes.

Ben Greear on the mailing list, however, had serious issues with 3000:

I am downloading a 20MB file using ftp, with a program compiled against lib-curl. With this patch,
download spikes to 150Mbps (or probably higher, depending on resolution), and
then has a long pause

I don't know what fixes are needed, but I would imagine that Ben's transfer speed varies differently than yours. Possibly his starts out a bit slow, so the initial limiting is far too "lenient", which would be a reason not to wait 3000 ms before we check again.
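
To see how a long recheck period produces exactly that spike-then-pause pattern (Ben's actual --limit-rate and chunk sizes are not stated, so these numbers are assumptions): suppose the limit is 2 MB/s and the socket delivers 6 MB at line rate in the first 200 ms of a window. The limiter then has to stall until the window average drops back to the target:

  time needed to "earn" 6 MB at 2 MB/s  =  6 / 2  =  3.0 s
  pause requested                       =  3.0 s - 0.2 s  =  2.8 s

so the transfer shows a brief burst near line speed followed by a multi-second stall. A shorter minimum period bounds the size of each burst and pause, at the cost of the averaging problems seen in the tests above.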

@liupeidong0620
Author

I can't reproduce it. Can you give a detailed test method?

@bagder
Member

bagder commented Mar 25, 2018

No, because it was Ben who had the trouble, not me.

@bagder
Member

bagder commented May 4, 2018

Any further suggestions or ideas? If not, I'm leaning toward simply closing this...

@bagder
Member

bagder commented May 31, 2018

No response, closing.

@bagder bagder closed this as completed May 31, 2018
@lock lock bot locked as resolved and limited conversation to collaborators Aug 29, 2018