S3 upload throttle: Add --upload.s3.target_mb_per_second parameter #294

Merged: 17 commits merged into Percona-Lab:master from s3-upload-throttle, Dec 7, 2018

Conversation

dschneller (Contributor)

Boto2 unfortunately does not provide a bandwidth limiter for
S3 uploads. Instead, it will upload a completed backup as quickly
as possible, potentially consuming all available network bandwidth
and therefore impacting other applications.

This patch adds a very basic throttling mechanism for S3 uploads
by optionally hooking into the upload progress and determining
the current bandwidth. If it exceeds the designated maximum, the
upload thread will pause for a suitable amount of time (capped
at 3 seconds) before resuming.

While this is far from ideal, it is an easy-to-understand and (in my
experience) good-enough method to protect other network users from
starvation.

Notice: The calculation happens per upload thread.
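For illustration, here is a minimal sketch (in Python, against boto2's upload progress callback) of the approach described above. The class name and wiring are illustrative, not the PR's actual code; boto2 invokes the `cb` callback with the bytes transmitted so far and the total size.

```
import time

class UploadThrottle(object):
    """Sleeps inside boto2's progress callback whenever the observed
    upload rate exceeds the configured target (sketch only)."""

    MAX_SLEEP_SECS = 3.0  # pauses are capped, as described above

    def __init__(self, target_mb_per_second):
        self.target_bps = target_mb_per_second * 1024 * 1024
        self.started_at = time.time()

    def __call__(self, bytes_transmitted, bytes_total):
        if bytes_transmitted <= 0:
            return
        elapsed = time.time() - self.started_at
        # How long the transfer *should* have taken at the target rate
        expected = bytes_transmitted / float(self.target_bps)
        sleep_for = min(expected - elapsed, self.MAX_SLEEP_SECS)
        if sleep_for > 0:
            time.sleep(sleep_for)  # pauses this upload thread only
```

Attached to an upload it would look something like `key.set_contents_from_filename(path, cb=UploadThrottle(10), num_cb=100)`, with a separate throttle instance per upload thread.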

A switch was already in place for the Python executable, but both the
readlink and cp commands use flags not present in the default macOS binaries.

This commit adds an upfront check that aborts with a message telling you to
install the coreutils package from Homebrew to get the GNU variants of both commands.
Allows specifying a custom location for the "tar" command to use.
Also, the flags are now passed to "tar" individually (`tar -cf` becomes `tar -c -f`).

This makes it easy to customize how the archiving is performed without adding
lots of new options. For example, you could encrypt backup data via a simple shell script
and specify it for --archive.tar.binary:

```
#!/bin/bash
# Wrapper for --archive.tar.binary: intercepts tar's -f flag, makes tar
# write the archive to stdout instead, and pipes it through gpg so the
# backup is encrypted before it ever reaches the output file.
gpg_pubkey_id=XXXXXXX
new_args=""

while [ "${#}" -gt 0 ]; do
  case "${1}" in
    -f)
      shift
      original_output_file="${1}"
      shift
      # Send the archive to stdout instead of the captured output file
      new_args="${new_args} -f -"
      ;;
    *)
      new_args="${new_args} ${1}"
      shift
      ;;
  esac
done

# new_args is deliberately left unquoted so it splits back into words
tar ${new_args} | gpg --always-trust --encrypt --recipient "${gpg_pubkey_id}" -z 0 --output "${original_output_file}"
```

This has several advantages:

* Backups are never written to disk unencrypted.
* Encryption happens in one go, instead of incurring the potentially heavy
  additional I/O a separate encryption step would cause.
* It is transparent to the upload stages, so you can still benefit from the
  integrated S3 (or other) uploads.
The S3 uploader fails if bucket permissions are restricted to allow access
only to certain prefixes within a bucket. The default behavior of boto's
"get_bucket()" is to "validate" the bucket by accessing its root, needlessly
breaking the uploader even though all necessary permissions might be present.

This patch adds a new command line switch --upload.s3.skip_bucket_validation
to disable this behavior.
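For reference, a minimal sketch of what the switch amounts to in boto2 terms. The connection setup and names here are illustrative, but the `validate` parameter of `get_bucket()` is the real boto2 API:

```
import boto.s3

# Illustrative wiring; in the tool this would be driven by the
# --upload.s3.skip_bucket_validation command line switch.
skip_bucket_validation = True

conn = boto.s3.connect_to_region("us-east-1")

# validate=True (the default) makes boto fetch the bucket root, which
# fails when the credentials only grant access to certain prefixes.
bucket = conn.get_bucket("backup-bucket",
                         validate=not skip_bucket_validation)
key = bucket.new_key("allowed/prefix/backup.tar.gpg")
```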
timvaillancourt merged commit c5f3008 into Percona-Lab:master on Dec 7, 2018.
dschneller deleted the s3-upload-throttle branch on December 13, 2018.