S3 upload throttle: Add --upload.s3.target_mb_per_second parameter #294
Merged: timvaillancourt merged 17 commits into Percona-Lab:master from CenterDevice:s3-upload-throttle on Dec 7, 2018
Conversation
There was already a switch in place for the Python executable, but both the readlink and cp commands use flags not present in the default macOS binaries. This commit adds an upfront check and aborts with a message explaining that the coreutils package from Homebrew is needed to get the GNU variants of both commands.
Allows specifying a custom location of the "tar" command to use. Also, the flags sent to "tar" are sent individually (`tar -cf` becomes `tar -c -f`). This allows easily customizing how the archiving is performed without having to add lots of new options. For example, you could encrypt backup data via a simple shell script and specify it for --archive.tar.binary:

```
#!/bin/bash
gpg_pubkey_id=XXXXXXX

new_args=""
while [ "${#}" -gt 0 ]; do
  case "$1" in
    -f)
      shift
      original_output_file="${1}"
      shift
      new_args="${new_args} --to-stdout"
      ;;
    *)
      new_args="${new_args} ${1}"
      shift
      ;;
  esac
done

tar ${new_args} | gpg --always-trust --encrypt --recipient ${gpg_pubkey_id} -z 0 --output ${original_output_file}
```

This has several advantages:

* Backups are never written to disk unencrypted.
* Encryption happens in one pass, instead of causing the potentially heavy additional I/O a separate encryption step would incur.
* It's transparent to the upload stages, so you can still benefit from the integrated S3 (or other) uploads.
The S3 uploader fails if bucket permissions are restricted to only allow accessing certain prefixes in a bucket. The default behavior for boto's "get_bucket()" is to "validate" it by accessing the bucket's root, needlessly breaking the uploader even though all necessary permissions might be present. This patch adds a new command line switch --upload.s3.skip_bucket_validation to disable this behavior.
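To illustrate what the new flag changes, here is a minimal standalone sketch (not the actual patch; `get_backup_bucket` is an invented helper). In boto2, `get_bucket()` defaults to `validate=True`, which issues a request against the bucket root; with prefix-restricted IAM permissions that request is denied, so the uploader needs to pass `validate=False` instead:

```python
def get_backup_bucket(conn, bucket_name, skip_validation=True):
    """Fetch an S3 bucket, optionally skipping boto2's default root-access check.

    conn is expected to behave like a boto2 S3Connection, whose get_bucket()
    signature is get_bucket(bucket_name, validate=True, ...). With
    validate=True boto hits the bucket root, which fails under
    prefix-restricted permissions even though uploads to the allowed
    prefix would succeed.
    """
    return conn.get_bucket(bucket_name, validate=not skip_validation)
```

The uploader would call this with `skip_validation` wired to the new `--upload.s3.skip_bucket_validation` switch, leaving the default boto2 behavior intact when the flag is not set.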
Boto2 unfortunately does not provide a bandwidth limiter for S3 uploads. Instead, it uploads a completed backup as quickly as possible, potentially consuming all available network bandwidth and impacting other applications. This patch adds a very basic throttling mechanism for S3 uploads by optionally hooking into the upload progress and measuring the current bandwidth. If it exceeds the designated maximum, the upload thread pauses for a suitable amount of time (capped at 3 seconds) before resuming. While this is far from ideal, it is an easy-to-understand and (in my experience) good-enough method to protect other network users from starvation. Note: the calculation happens per upload thread.
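The throttling scheme described above can be sketched as follows. This is a simplified illustration, not the code from the patch: the class name and the assumption that the target is converted as 1 MB = 1024*1024 bytes are mine; the progress hook plays the role of boto2's per-part upload callback (`cb(bytes_transmitted, total_size)`):

```python
import time

MAX_PAUSE_SECONDS = 3.0  # cap on any single pause, per the patch description


class UploadThrottle:
    """Per-thread throttle: pause when observed bandwidth exceeds the target."""

    def __init__(self, target_mb_per_second, clock=time.monotonic, sleep=time.sleep):
        # Assumes 1 MB = 1024 * 1024 bytes; clock/sleep are injectable for testing.
        self.target_bytes_per_second = target_mb_per_second * 1024 * 1024
        self.clock = clock
        self.sleep = sleep
        self.start = self.clock()

    def progress(self, bytes_transferred):
        # Called from the upload progress callback with the running byte count.
        elapsed = max(self.clock() - self.start, 1e-9)
        rate = bytes_transferred / elapsed
        if rate > self.target_bytes_per_second:
            # Sleep just long enough that the average rate falls back to the
            # target, but never more than MAX_PAUSE_SECONDS at once.
            needed = bytes_transferred / self.target_bytes_per_second - elapsed
            self.sleep(min(needed, MAX_PAUSE_SECONDS))
```

Because each upload thread keeps its own start time and byte count, the effective aggregate limit scales with the number of threads, which is exactly the per-thread caveat noted in the description.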
timvaillancourt
suggested changes
Dec 6, 2018
LGTM, please add new flag to https://github.com/Percona-Lab/mongodb_consistent_backup/blob/master/conf/mongodb-consistent-backup.example.conf
timvaillancourt
approved these changes
Dec 7, 2018