[GOAL2-558] Update S3 Usage #124

Merged
merged 30 commits into master from david/newS3Buckets on Jul 30, 2019

Conversation

Karmastic (Contributor):

Builds on Eric's fork to address rough edges found after testing all of the scenarios.
Tested everything I could. Algonet is not tested, but I'm working on a PR that updates it to set the appropriate environment variables for nodecfg and log uploads (there's a new algonet IAM user).

There will be some manual work required to transition to the new buckets (releases need to exist in both the old and new buckets for the first new release, and the bootstrappers need updating). It should be transparent to most people.

Note that goal logging send is now secured -- AWS doesn't support anonymous multipart uploads, so we'll have to distribute upload credentials as needed. This should almost eliminate our risk of S3 / billing abuse, with a slight inconvenience introduced when we need logs uploaded.

egieseke and others added 19 commits June 24, 2019 16:11
Use consistent naming for bucket name constants.
Replaced remaining references to S3_UPLOAD_ID and S3_UPLOAD_SECRET with AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Removed temporary recipe geo-test-small from testdata.
a flag or through S3_UPLOAD_BUCKET environment variable.
Removed obsolete fileIterator.
Fixed scripts and rationalized usage of S3_UPLOAD_BUCKET
Fixed anonymous credentials
Logging now requires credentials - S3 doesn't allow anonymous MultiPartUploads
Standardized use of S3_RELEASE_BUCKET for builds; S3_UPLOAD_BUCKET only used for uploading logs.
@Karmastic (Contributor, Author):

See Confluence for details / usage scenarios: https://algorand.atlassian.net/wiki/spaces/DEVOPS/pages/420118549/S3+Buckets+and+Policies

@egieseke (Contributor) left a comment:

These changes look ok to me. I will retest with Algonet to make sure that works.

egieseke previously approved these changes Jul 10, 2019

@egieseke (Contributor) left a comment:

I successfully built and deployed using Algonet.

@winder (Contributor) left a comment:

Looks great! Mostly nitpicks and minor suggestions.

.travis.yml (outdated)
  - name: release
-   if: branch =~ /^rel\/nightly/ AND type != pull_request
+   if: (branch =~ /^rel\/nightly/ OR branch =~ /^rel\/dev/) AND type != pull_request
Contributor:

I left this comment on https://github.com/algorand/go-algorand-internal/pull/3

To support more release branches than just rel/nightly, we should specifically exclude rel/stable rather than only include nightly:

This seems to work: branch != rel/stable AND branch =~ /^rel\//

$ travis-conditions eval "branch != rel/stable AND branch =~ /^rel\//" --data '{"branch": "rel/stable"}'
false
$ travis-conditions eval "branch != rel/stable AND branch =~ /^rel\//" --data '{"branch": "rel/nightly"}'
true
$ travis-conditions eval "branch != rel/stable AND branch =~ /^rel\//" --data '{"branch": "master"}'
false
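Applied to the job definition, the proposed condition would look something like this (a sketch; the actual stage layout of the .travis.yml is an assumption):

```yaml
jobs:
  include:
    - name: release
      # Run for any rel/* branch except rel/stable, and never for pull requests
      if: branch != rel/stable AND branch =~ /^rel\// AND type != pull_request
```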

Contributor:

Additionally, we could use Travis deployments for this rather than tasks. I'm using the GitHub Releases provider for SHD, but we could use the script or s3 provider as well:
https://docs.travis-ci.com/user/deployment/script/
https://docs.travis-ci.com/user/deployment/s3/

if [[ ! -z "${S3_RELEASE_BUCKET}" && -z "${BUCKET}" ]]; then
    echo "Ignoring S3_RELEASE_BUCKET setting - defaulting to algorand-internal. Use -b to override."
fi
S3_RELEASE_BUCKET="${BUCKET:-algorand-internal}"
Contributor:

nit: Could you set the algorand-internal default at the top of the file when the rest are initialized to empty? I think it will let you simplify the code a bit:

if [[ ! -z "${S3_RELEASE_BUCKET}" ]]; then
    echo "Ignoring S3_RELEASE_BUCKET setting - defaulting to algorand-internal.  Use -b to override."
fi

# If we really need this it would always be.
export S3_RELEASE_BUCKET="${BUCKET}"

Then just use -b "${BUCKET}" at the end.

Author:

Initializing it up front would prevent reporting this warning.

Contributor:

Given the users of this script I don't think it's a major point, but with the current logic S3_RELEASE_BUCKET could be silently overwritten with the default value.
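To make the flow concrete, here is a minimal illustration of the current behavior with hypothetical caller values (the warning fires, then the exported value is replaced by the default):

```shell
#!/usr/bin/env bash
# Caller exports a custom bucket but does not pass -b:
export S3_RELEASE_BUCKET="my-custom-bucket"
BUCKET=""   # -b was not supplied

# The warning fires, then the custom value is replaced by the default.
if [[ ! -z "${S3_RELEASE_BUCKET}" && -z "${BUCKET}" ]]; then
    echo "Ignoring S3_RELEASE_BUCKET setting - defaulting to algorand-internal. Use -b to override."
fi
S3_RELEASE_BUCKET="${BUCKET:-algorand-internal}"
echo "${S3_RELEASE_BUCKET}"
```

With -b supplied, `${BUCKET:-algorand-internal}` expands to the flag value instead and no warning is printed.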

if [[ ! -z "${S3_RELEASE_BUCKET}" && -z "${BUCKET}" ]]; then
    echo "Ignoring S3_RELEASE_BUCKET setting - defaulting to algorand-internal. Use -b to override."
fi
S3_RELEASE_BUCKET="${BUCKET:-algorand-internal}"
Contributor:

Same as above

if [[ ! -z "${S3_RELEASE_BUCKET}" && -z "${BUCKET}" ]]; then
    echo "Ignoring S3_RELEASE_BUCKET setting - defaulting to algorand-internal. Use -b to override."
fi
S3_RELEASE_BUCKET="${BUCKET:-algorand-internal}"
Contributor:

Same as above

@@ -13,10 +13,12 @@ set -e
CHANNEL=""
FULLVERSION=""
S3CMD="s3cmd"
BUILD_BUCKET=${S3_RELEASE_BUCKET}
RELEASE_BUCKET="algorand-releases"
Contributor:

This is the pattern I like :)

@@ -50,9 +52,10 @@ function promote_stable() {
init_s3cmd

# Copy the _CHANNEL_ pending 'node' files to _CHANNEL-canary_
Contributor:

Update comment

Author:

What comment update is needed?

Contributor:

It's also copying the files to a new bucket

Remove pending_ from 'node' files, rename to CHANNEL-canary, and copy to the RELEASE_BUCKET

@@ -31,5 +31,5 @@ TARFILE=${TEMPDIR}/config_${CHANNEL}_${FULLVERSION}.tar.gz
cd $1
tar -zcf ${TARFILE} * >/dev/null 2>&1

-${GOPATH}/bin/updater send -s ${TEMPDIR} -c ${CHANNEL}
+${GOPATH}/bin/updater send -s ${TEMPDIR} -c ${CHANNEL} -b "${S3_RELEASE_BUCKET}"
Contributor:

Has this environment variable been validated by another script which calls this one?
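One defensive pattern for the callee (a sketch, not from the PR; the guard function name is hypothetical) is to fail fast with a clear message when the variable is unset or empty, instead of passing an empty -b value downstream:

```shell
#!/usr/bin/env bash
# Hypothetical guard: ":?" aborts the script with the given message when
# S3_RELEASE_BUCKET is unset or empty, and is a no-op otherwise.
check_bucket() {
    : "${S3_RELEASE_BUCKET:?S3_RELEASE_BUCKET must be set before calling this script}"
}

S3_RELEASE_BUCKET="algorand-internal"
check_bucket && echo "bucket ok"
```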

	next struct {
		path string
		f    *os.File
	}
	err error
}

-func makeFileIterator(files []string, bucket string) s3manager.BatchUploadIterator {
+func makeFileIterator(files []string, bucket string, subFolder string) s3manager.BatchUploadIterator {
Contributor:

The other function names this parameter targetFolder; might as well keep them consistent.


func checkCredentialsRequired(action string, bucketName string) (required bool) {
required = true
if action == downloadAction && bucketName == s3DefaultReleaseBucket {
Contributor:

spaces -> tab
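For context, the credentials check shown in the diff can be read as the following self-contained sketch (the package layout and the bucket-constant value "algorand-releases" are assumptions based on the rest of this PR; only the function body and constant names come from the diff):

```go
package main

import "fmt"

const (
	downloadAction         = "download"
	s3DefaultReleaseBucket = "algorand-releases"
)

// checkCredentialsRequired reports whether AWS credentials are needed for
// the given action/bucket pair. Anonymous access is permitted only for
// downloads from the public release bucket; uploads and any other bucket
// always require credentials.
func checkCredentialsRequired(action string, bucketName string) (required bool) {
	required = true
	if action == downloadAction && bucketName == s3DefaultReleaseBucket {
		required = false
	}
	return
}

func main() {
	fmt.Println(checkCredentialsRequired(downloadAction, s3DefaultReleaseBucket)) // false
	fmt.Println(checkCredentialsRequired("upload", s3DefaultReleaseBucket))       // true
}
```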


Contributor:

Is it possible to check the session credentials in gotHelper.session? Would be nice to check directly that the anonymous credentials work. I guess it's implicitly tested in test5/test9 - maybe give those tests more specific names?

We no longer prefix with "pending_" and instead move from the build bucket to the release bucket.
winder previously approved these changes Jul 29, 2019

@winder (Contributor) left a comment:

LGTM

@Karmastic Karmastic merged commit 7588e41 into master Jul 30, 2019
@Karmastic Karmastic deleted the david/newS3Buckets branch July 30, 2019 13:55
algorandskiy pushed a commit to algorandskiy/go-algorand that referenced this pull request Jun 15, 2020