
Add ability to limit bandwidth for S3 uploads/downloads #1090

Closed
jamesls opened this issue Jan 13, 2015 · 67 comments

Comments

@jamesls
Member

commented Jan 13, 2015

Originally raised in #1078, this is a feature request to add the ability for the aws s3 commands to limit the amount of bandwidth used for uploads and downloads.

In the referenced issue, it was specifically mentioned that some ISPs charge fees if you exceed a specific Mbps, so users need the ability to limit bandwidth.

I imagine this is something we'd only need to add to the aws s3 commands.

@AustinSnow

commented Mar 29, 2015

Hello jamesls,
Could you provide a timeframe for when the bandwidth limit might become available?
Thanks
austinsnow

@kjohnston

commented Jun 1, 2015

👍

3 similar comments
@beauhoyt

commented Jul 28, 2015

👍

@bhegazy

commented Jul 30, 2015

👍

@seattledoug

commented Jul 31, 2015

👍

@dsclassen

commented Sep 22, 2015

@godefroi

commented Oct 4, 2015

👍

3 similar comments
@rayterrill

commented Oct 4, 2015

👍

@kazeburo

commented Oct 5, 2015

👍

@isaoshimizu

commented Oct 5, 2015

👍

@quiver

Contributor

commented Oct 5, 2015

On Unix-flavored systems, trickle comes in handy for ad-hoc throttling. trickle hooks the socket APIs using LD_PRELOAD and throttles bandwidth.

You can run commands like:

$ trickle -s -u {UPLOAD_LIMIT(KB/s)} command
$ trickle -s -u {UPLOAD_LIMIT(KB/s)} -d {DOWNLOAD_LIMIT(KB/s)} command

A built-in feature would be really useful, but given the cross-platform nature of the AWS CLI, it could cost a lot to implement and maintain.
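
For example, here is a sketch of capping an aws s3 sync upload at roughly 500 KB/s (the local path and bucket name are placeholders):

$ trickle -s -u 500 aws s3 sync ./logs s3://examplebucket/logs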

@binaryorganic

commented Oct 5, 2015

Trickle is specifically mentioned in issue #1078, which is linked in the first comment here. The two (trickle and the AWS CLI) just don't play nicely together in my experience.

@isp0000

commented Oct 22, 2015

👍

1 similar comment
@l3rady

commented Nov 12, 2015

👍

@andrefelipe

commented Dec 2, 2015

+1

5 similar comments
@ddehghan

commented Dec 31, 2015

+1

@joshpelz

commented Jan 19, 2016

👍

@whiteadam

commented Jan 19, 2016

+1

@mikeg0

commented Feb 1, 2016

+1

@nhumphreys

commented Feb 10, 2016

👍

@apeschar

commented Mar 9, 2016

(Y)

@JulienChampseix

commented Mar 25, 2016

👍

@ikoniaris

commented Apr 8, 2016

👍 this is much needed!

@aegixx

commented Apr 15, 2016

👍

@aflugge

commented Jan 30, 2017

👍

@moses-moore-spafax

commented Feb 3, 2017

On the one hand: much faster than s3cmd.
On the other hand: my hosting company automatically halted a server for using "suspiciously high amounts of bandwidth".
Someone suggested aws configure set default.s3.max_concurrent_requests $n where $n is less than 10. Not sure if that is enough; will investigate the trickle tool mentioned above.
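
For reference, a sketch of lowering the transfer concurrency from the default of 10 (the value 2 here is only an example):

$ aws configure set default.s3.max_concurrent_requests 2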

@nikitasius

commented Mar 23, 2017

👍

@cobaltjacket

commented Mar 23, 2017

Over two years in, and this request is still outstanding. Is there a timeframe by which this could be implemented?

@hilyin

commented Apr 6, 2017

👍

4 similar comments
@zouyixiong

commented Jun 6, 2017

👍

@zouyixiong

commented Jun 6, 2017

👍

@danielpfarmer

commented Jun 9, 2017

👍🏿

@nullobject

commented Jun 9, 2017

👍

@leonsmith

commented Jun 14, 2017

Just nuked the internet in a shared office.
This would be a nice feature for when you want to be kind to other people.
👍

@pticyn

commented Jun 15, 2017

👍

@markdavidburke

commented Jun 15, 2017

You can use trickle -s -u 100 aws s3 sync . s3://examplebucket

@ikoniaris

commented Jun 15, 2017

@sofuca does this work correctly, though? Many people have tried trickle for this, but the results were questionable. See #1078.

@markdavidburke

commented Jun 16, 2017

@ikoniaris

Works perfectly for me.

The following command nukes the internet in the office (it's a 20Mb/s connection):

aws s3 cp /foo s3://bar

And the following command uploads at a nice 8Mb/s:

trickle -s -u 1000 aws s3 sync /foo s3://bar

[Screenshot of the outside interface of the firewall I'm using]

@mxins

commented Jul 6, 2017

👍

@ctaperts

commented Jul 30, 2017

Trickle and large S3 files will cause trickle to crash.

@wadejensen

commented Aug 17, 2017

(y)

@ctaperts

commented Aug 22, 2017

Sorry, to clarify: trickle and large S3 files will cause trickle to crash when boto3 uses 10 concurrent uploads (the default setting); lowering the number of concurrent uploads resolves the issue. I need to file this in boto3's GitHub, thanks!
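
For reference, a sketch of the equivalent setting in the ~/.aws/config file (the value 2 is only an example):

[default]
s3 =
  max_concurrent_requests = 2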

@bhicks-usa

commented Sep 8, 2017

👍

So it's been over 2.5 years since this was opened. Is this request just being ignored?

@tantra35

commented Oct 6, 2017

We use pv (https://linux.die.net/man/1/pv) in this manner:

/usr/bin/pv -q -L 20M $l_filepath | /usr/local/bin/aws s3 cp --region "us-east-1" - s3://<s3-bucket>/<path in s3 bucket>

This solution is not ideal (it requires extra handling for filtering and recursion, which we do inside a bash loop), but it is much better than trickle, which in our case used 100% of the CPU and behaved very unstably.

Here is our full use case of pv (we limit the upload speed to 20MB/s == 160Mbit/s):

for l_filepath in /logs/*.log-*; do
    l_filename=$(basename "$l_filepath")
    /usr/bin/pv -q -L 20M "$l_filepath" | /usr/local/bin/aws s3 cp --region "us-east-1" - "s3://$S3BUCKET/$HOSTNAME/$l_filename"
    /bin/rm "$l_filepath"
done

@jonoaustin

commented Oct 9, 2017

+1

Real-life use case: a very large upload to S3 over DX (Direct Connect); we do not want to saturate the link and potentially impact production applications using the DX link.

@ischoonover

commented Oct 25, 2017

throttle, trickle, and pv all do not work for me on Arch Linux with the latest awscli from pip when uploading to a bucket. I have additionally set max_concurrent_requests for s3 in the awscli configuration to 1, with no difference made. This would be a much appreciated addition!

@tantra35

commented Oct 26, 2017

@ischoonover It seems that you don't pass --expected-size to the aws cli when using it with pv; it is very useful when you try to upload very big files.

--expected-size (string) This argument specifies the expected size of a stream in terms of bytes. Note that this argument is needed only when a stream is being uploaded to s3 and the size is larger than 5GB. Failure to include this argument under these conditions may result in a failed upload due to too many parts in upload.
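
For illustration, a sketch of a streamed upload with an explicit expected size (the 20 GB figure, file path, and bucket are placeholders):

/usr/bin/pv -q -L 20M /logs/big.log | /usr/local/bin/aws s3 cp --expected-size 21474836480 --region "us-east-1" - s3://<s3-bucket>/big.log
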
@ischoonover

commented Oct 26, 2017

@tantra35 The size was 1GB. I ended up using s3cmd, which has rate limiting built in via --limit-rate.
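
For reference, a sketch of that s3cmd option (the file and bucket names are placeholders; 1m means roughly 1 MB/s):

$ s3cmd put --limit-rate=1m backup.tar.gz s3://examplebucket/backup.tar.gz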

@joguSD

Contributor

commented Jan 2, 2018

Implemented in #2997.
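
Assuming #2997 shipped the s3 max_bandwidth configuration setting (an assumption here, not confirmed in this thread), usage would look something like:

$ aws configure set default.s3.max_bandwidth 10MB/s
$ aws s3 cp /foo s3://bar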
