curl: set a 100K buffer size by default #1446
Test command `time curl http://localhost/80GB -so /dev/null` on Debian Linux.

Before (middle-performing run out of 9):
real 0m28.078s
user 0m11.240s
sys  0m12.876s

After (middle-performing run out of 9):
real 0m26.356s (93.9%)
user 0m5.324s  (47.4%)
sys  0m8.368s  (65.0%)
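As a sanity check, the percentages in parentheses can be reproduced from the raw timings (after divided by before):

```shell
# Verify the quoted before/after ratios from the timings above.
awk 'BEGIN {
  printf "real %.1f%%\n", 26.356/28.078*100;  # wall-clock time
  printf "user %.1f%%\n",  5.324/11.240*100;  # user CPU time
  printf "sys  %.1f%%\n",  8.368/12.876*100;  # system CPU time
}'
# → real 93.9%
# → user 47.4%
# → sys  65.0%
```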
I suspect this is unnecessary and you should back it up with some typical real-world data scenarios. Even if there's a very slight difference in a corner case, IIRC the user can already set up to a 500K buffer size via rate limiting.
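For reference, rate limiting from the command line looks roughly like this (the URL is a placeholder; the buffer-size side effect mentioned above is an internal detail that may vary by curl version):

```shell
# Cap the transfer at 500KB/second with curl's --limit-rate option.
# Per the comment above, rate limiting has (in some curl versions)
# also influenced the internal buffer sizing.
curl --limit-rate 500K -o /dev/null https://example.com/file
```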
For comparison, I ran your 80GB test on Windows against the parent you branched from, 7474418 (Server: Apache/2.4.18 (Win64)).
I think this is an easy fix that makes some improvements at a very low cost. And it doesn't touch the library, which makes it even less risky. To me, the numbers I've already shown are reason enough: curl will use less CPU and system resources this way. Sure, for regular network transfers it will be bottlenecked by the bandwidth, but being lightweight is also a good virtue.
But there are some actual network speed reasons too, as you mention. SFTP in particular will get a serious boost with this.
SFTP 4GB over localhost
With the new (larger) buffer: about 10% faster here.
SFTP 4GB over a 200ms latency link
(test run by adding latency to 'lo')
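Adding latency to 'lo' can be done with tc/netem, roughly like this (requires root; exact syntax may vary by distribution):

```shell
# Add 200 ms of delay to every packet on the loopback interface.
tc qdisc add dev lo root netem delay 200ms

# ... run the transfer tests here ...

# Remove the artificial delay again.
tc qdisc del dev lo root
```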
old: transfer rate 310KB/sec
new: transfer rate 1930KB/sec
(The ratio of this speed increase is almost identical to the ratio of the buffer increase...)
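That proportionality claim checks out numerically, assuming the previous default buffer was 16KB (that figure is not stated above; the new size is the 100K from the PR title):

```shell
# Compare the measured speed-up ratio against the buffer-size ratio.
awk 'BEGIN {
  printf "speed ratio:  %.2f\n", 1930/310;  # new rate / old rate
  printf "buffer ratio: %.2f\n", 100/16;    # 100K / assumed old 16K
}'
# → speed ratio:  6.23
# → buffer ratio: 6.25
```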
HTTP over a 200 ms latency link
As a comparison, to see how it fares when we add latency to the mix, so that it isn't completely CPU-bound anymore.
Virtually no speed difference, but slightly lower system load.