S3 progress verbosity should be optional #519

Closed
ddarbyson opened this Issue Nov 30, 2013 · 24 comments

ddarbyson commented Nov 30, 2013

When using aws s3 sync, the verbose output provides updates such as "Completed 101 of 109 part(s) with 4 file(s) remaining". When piping the output to a log file, the results are as shown:

upload: ../var/server/backup/websites/mysite/mysite_shadow.mig to s3://my-bucket/working/daily/websites/mysite/mysite_shadow.mig
Completed 101 of 109 part(s) with 4 file(s) remaining
upload: ../var/server/backup/websites/mysite/mysite_user.sql to s3://my-bucket/working/daily/websites/mysite/mysite_user.sql
Completed 102 of 109 part(s) with 3 file(s) remaining
upload: ../var/server/backup/websites/mysite/mysite_vhost.conf to s3://my-bucket/working/daily/websites/mysite/mysite_vhost.conf
Completed 103 of 109 part(s) with 2 file(s) remaining
Completed 104 of 109 part(s) with 2 file(s) remaining
Completed 105 of 109 part(s) with 2 file(s) remaining
Completed 106 of 109 part(s) with 2 file(s) remaining
Completed 107 of 109 part(s) with 2 file(s) remaining
Completed 108 of 109 part(s) with 2 file(s) remaining
upload: ../var/server/backup/websites/mysite/mysite_home.tar.gz to s3://my-bucket/working/daily/websites/mysite/mysite_home.tar.gz
Completed 108 of 109 part(s) with 1 file(s) remaining
Completed 109 of 109 part(s) with 1 file(s) remaining

Log files would read much better if the "Completed ... of ... part(s)" lines were not displayed.

Having a new --[no-]progress option for the S3 commands would be nice.
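For illustration, this is the kind of invocation I have in mind for a nightly backup (the flag name is hypothetical at this point):

# Hypothetical --no-progress flag: one "upload: ..." line per file, no counter lines
aws s3 sync /var/server/backup s3://my-bucket/working/daily --no-progress >> /var/log/s3-backup.log 2>&1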

ddarbyson commented Feb 21, 2014

Is there anybody out there?

ddarbyson commented Jun 8, 2015

I suppose this has been put on the back, back burner? Original comment is from 2013...

SirZach commented Jul 20, 2015

Have you tried --quiet?

beauhoyt commented Jul 28, 2015

This seems like something you should be able to solve with the Unix toolchain.
Simply doing something like this would get you the result you desire:
aws s3 sync /source/folder s3://dest-bucket/ | grep -v 'file(s) remaining' > /var/log/some-aws-s3-sync-logfile-name

ddarbyson commented Jul 30, 2015

Nice idea @beauhoyt, but the inverted grep doesn't stop the in-place progress counter from polluting log files. The only workaround, which isn't ideal for logging, is the --only-show-errors option.

--quiet as suggested by @SirZach doesn't provide enough detail if something goes wrong.

It would be nice to see an option such as --skip-progress or something similar to omit this self-updating terminal output.

SirZach commented Jul 30, 2015

@ddarbyson just make sure nothing goes wrong and you're good to go :)

ddarbyson commented Jul 30, 2015

@SirZach if only software development could be so easy :)

nkadel-skyhook commented Sep 23, 2016

Fixing it with a grep wrapper is a horrible, horrible solution. I'd find such an option very useful for reporting transmitted files, without having to embed grep wrappers in cron jobs.

tomorsi commented Oct 4, 2016

I have a very long-running sync that has to be restarted. Is there any way to have s3 sync report where it is in verifying objects that have already been synced?

I am only getting the "Completed 0 parts(0) with ... file(s) remaining" message.

benbunk commented Mar 17, 2017

Has anyone taken a stab at fixing this? I have a cron job backing up a 2.5 GB file to S3 and it generates nearly 10,000 lines of this in my logs and I would love to get rid of it:

Completed 1.0 MiB/2.5 GiB (4.9 MiB/s) with 1 file(s) remaining 

--quiet and --only-show-errors don't seem to make any difference in hiding the progress.

nkadel-skyhook commented Mar 17, 2017

jrottenberg commented May 5, 2017

@nkadel-skyhook I believe what is expected is to show the success messages without the progress. Observed:

Completed 108 of 109 part(s) with 2 file(s) remaining
upload: ../var/server/backup/websites/mysite/mysite_home.tar.gz to s3://my-bucket/working/daily/websites/mysite/mysite_home.tar.gz
Completed 108 of 109 part(s) with 1 file(s) remaining
Completed 109 of 109 part(s) with 1 file(s) remaining
copy ../var/server/backup/websites/mysite/mysite_home.tar.gz to s3://my-bucket/working/daily/websites/mysite/mysite_home.tar.gz

Expected:

copy ../var/server/backup/websites/mysite/mysite_home.tar.gz to s3://my-bucket/working/daily/websites/mysite/mysite_home.tar.gz

jrottenberg commented May 5, 2017

OK, I was able to find a workaround, but it's not the most elegant...
aws s3 sync s3://src-bucket/ s3://dest-bucket/ | tr "\r" "\n" | grep -v '^Completed ' | logger -t sync

You get the actual copy messages in syslog, successful or not, without the progress information.

nkadel-skyhook commented May 5, 2017

@jrottenberg I wouldn't consider feeding the output of an aws s3 sync to "logger" to be a reasonable approach. System logs are not necessarily visible to non-root users.

I would like an option similar to rsync's "-v" setting, which provides a simple record of each file as it is transferred, with no churning report of the transfer's progress. awscli does not currently have such an option, as best I can tell. It's an all-or-nothing setting.

jrottenberg commented May 5, 2017

Ah, the logger part is just to avoid redirecting to a file (it can fill up the disk, has no timestamps, and so on; syslog addresses those issues for me, and I have awslogs for my users). The main point is the tr and grep -v.
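If you'd rather have a plain file that a non-root user can read, a rough variant of the same pipeline with timestamps (log path is just an example):

# date -Is is GNU date; adjust the format on BSD/macOS
aws s3 sync s3://src-bucket/ s3://dest-bucket/ | tr '\r' '\n' | grep -v '^Completed ' \
  | while IFS= read -r line; do printf '%s %s\n' "$(date -Is)" "$line"; done >> ~/s3-sync.log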

vindimy commented May 26, 2017

+1, would like this implemented. I work for a fairly large org and we use AWS services extensively. Scheduling backups to S3 is a pain in the ass because awscli s3 sync generates unreadable garbage output when redirected to a file, making it impossible to verify or detect when something goes wrong. For us --only-show-errors is not a good option, because we frequently run into issues troubleshooting long-running syncs and have no way of knowing whether or where they're stuck.

nkadel-skyhook commented Jul 6, 2017

The option of piping the output through the scripting reported above is useful. As a pure matter of form, I'd urge using ' or " consistently rather than mixing them, so I'd use this:

    | tr '\r' '\n' | sed '/^Completed/d'

This also avoids a failure mode of the original grep approach: when every line matches '^Completed', grep -v produces no output and exits non-zero, which reads as an error.

The remaining danger is that the pipeline can return the exit status of the "sed" command, not that of a failed "aws s3 sync" command. To avoid missing error reports, precede the command in a "bash" shell script with "set -o pipefail". Something like this:

    set -o pipefail; aws s3 sync dirname/ s3://bucketname/dirname/ | tr '\r' '\n' | sed '/^Completed/d'

Autocorrect kept trying to change "pipefail" to "pipefile"!
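Putting those pieces together, a minimal cron-ready sketch (bucket, directory, and log path are placeholders):

#!/bin/bash
set -o pipefail   # the pipeline fails when 'aws s3 sync' fails, not just the filters
# Redirect stderr before the pipe so error messages land in the log as well
aws s3 sync dirname/ s3://bucketname/dirname/ 2>&1 \
    | tr '\r' '\n' \
    | sed '/^Completed/d' >> /var/log/s3-sync.log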

schmorla commented Jul 17, 2017

Another +1 for this.

enriccalabuig commented Jul 26, 2017

Totally necessary IMO. I run aws s3 sync in a daily cron job, and it makes monitoring via cron's MAILTO a nightmare.

hansw96 commented Aug 1, 2017

+1 for this

rickripe commented Sep 6, 2017

+1. I can't believe that, 4 years later, someone has picked this up only to have it stall with "needs-review" status on pull request #2747. It's rather tragic that it took so long for someone to do the work, and now we wait (again) for someone to review and approve. I was going to pull down the code and attempt it myself.

milh0use commented Sep 15, 2017

Yes please, +1

joguSD commented Oct 13, 2017

Implemented in #2747

You should now be able to specify the --no-progress flag to opt out of progress-related messages while still getting more information than --quiet.
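For example (paths illustrative):

# Still prints one "upload: ..." line per transferred file, but no progress counter
aws s3 sync /var/server/backup s3://my-bucket/working/daily --no-progress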

joguSD closed this Oct 13, 2017

wrunderwood commented Dec 14, 2017

It looks like progress messages are sent to stdout and errors to stderr. Sending stdout to /dev/null works for me:

aws s3 cp a b > /dev/null
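So for a cron job where only failures matter, a minimal variant keeps the error stream in a log (log path illustrative):

# Discard transfer/progress messages (stdout), append errors (stderr) to a log
aws s3 cp a b > /dev/null 2>> /var/log/s3-errors.log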
