
Improve speed limits, not based on average speeds anymore #971

Closed
wants to merge 1 commit into from

Commits on Aug 19, 2016

  1. Improve speed limits, not based on average speeds anymore

    Speed limits (set via CURLOPT_MAX_RECV_SPEED_LARGE & CURLOPT_MAX_SEND_SPEED_LARGE)
    were applied simply by comparing the limits with the cumulative average speed of
    the entire transfer. While this might work at times with good/constant connections,
    in other cases it can result in the limits simply being "ignored" for more than
    "short bursts" (as stated in the man page).
    
    Consider a download that runs much slower than the limit for some time (because
    bandwidth is used elsewhere, the server is slow, whatever the reason); once
    conditions improve, curl would simply ignore the limit until the average speed
    (measured since the beginning of the transfer) reached the limit. This could
    render the limit useless for keeping the transfer from consuming the entire
    bandwidth, at least for quite some time.
    
    So instead, we now use a "moving starting point" as the reference, and every
    time at least a limit's worth of data has been transferred, we reset this
    starting point to the current position. This gives a good limiting effect that
    applies to the "current speed", with instant reactivity in case of a sudden
    speed burst (see the sketch after the commit details below).
    jjk-jacky committed Aug 19, 2016
    7242abe
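
For illustration, here is a minimal, self-contained C sketch of the "moving starting point" idea described in the commit message. It is not the code from commit 7242abe; the struct and function names (`rate_window`, `rate_check`) and the per-second granularity are assumptions made purely for this example.

```c
/* Minimal sketch of the "moving starting point" rate-limit check described
 * above -- not curl's actual implementation; all names are illustrative. */
#include <stddef.h>
#include <time.h>

struct rate_window {
  time_t start;   /* moving starting point (seconds since epoch) */
  size_t bytes;   /* bytes transferred since 'start' */
  size_t limit;   /* allowed bytes per second */
};

/* Returns the number of seconds the caller should pause before transferring
 * more data, or 0 if the transfer may continue at full speed. */
static unsigned int rate_check(struct rate_window *rw,
                               size_t just_transferred,
                               time_t now)
{
  rw->bytes += just_transferred;

  /* Once at least one "limit's worth" of data has gone through, the window
   * can be evaluated and the starting point moved forward, so old history
   * (e.g. a long slow period) no longer inflates the allowance. */
  if(rw->bytes >= rw->limit) {
    time_t min_elapsed = (time_t)(rw->bytes / rw->limit);
    time_t elapsed = now - rw->start;

    if(elapsed < min_elapsed) {
      /* too fast: wait out the remainder of the window */
      return (unsigned int)(min_elapsed - elapsed);
    }

    /* window satisfied: reset the reference point to "here" */
    rw->start = now;
    rw->bytes = 0;
  }
  return 0;
}
```

Because the reference point is reset whenever the window is satisfied, a burst right after a slow stretch is throttled immediately, instead of being excused until the whole-transfer average catches up with the limit, which is the behaviour the commit message criticizes.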