
[GOAL2-558] Update S3 Usage #124

Merged
merged 30 commits from david/newS3Buckets into master on Jul 30, 2019

Conversation

Karmastic (Contributor) commented Jul 8, 2019

Builds on Eric's fork to address rough edges found after testing all of the scenarios.
Tested everything I could. Algonet is not tested yet, but I'm working on a PR that updates it to set the appropriate environment variables for nodecfg and log uploads (there's a new algonet IAM user).

There will be some manual work required to transition to the new buckets (releases need to exist in both the old and new buckets for the first new release, and the bootstrappers need to be updated). It should be transparent to most people.

Note that goal logging send is now secured -- AWS doesn't support anonymous multi-part uploads, so we'll have to distribute upload credentials as needed. This should almost eliminate our risk of S3 / billing abuse, with a slight inconvenience introduced when we need logs uploaded.
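The access model this implies -- anonymous reads allowed only from the public release bucket, credentials required for every upload -- can be sketched as a small dependency-free Go helper. This mirrors the checkCredentialsRequired function that appears later in the diff; the constants and bucket names here are illustrative, not the PR's exact values.

```go
package main

import "fmt"

const (
	downloadAction         = "download"
	uploadAction           = "upload"
	s3DefaultReleaseBucket = "algorand-releases" // public release bucket (illustrative name)
)

// checkCredentialsRequired: only anonymous downloads from the public release
// bucket skip credentials; every upload (including log uploads) needs real
// AWS credentials, since S3 rejects anonymous multi-part uploads.
func checkCredentialsRequired(action string, bucketName string) bool {
	if action == downloadAction && bucketName == s3DefaultReleaseBucket {
		return false
	}
	return true
}

func main() {
	fmt.Println(checkCredentialsRequired(downloadAction, s3DefaultReleaseBucket))
	fmt.Println(checkCredentialsRequired(uploadAction, "algorand-uploads"))
}
```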

egieseke and others added 19 commits Jun 24, 2019

synch with master
variables S3_UPLOAD_BUCKET and S3_REGION.
create_and_deploy_recipe.sh script.
…sting
file in util/s3.
Use consistent naming for bucket name constants.
Replaced remaining references to S3_UPLOAD_ID and S3_UPLOAD_SECRET with AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Removed temporary recipe geo-test-small from testdata.
a flag or through S3_UPLOAD_BUCKET environment variable.
Removed obsolete fileIterator.
Fixed scripts and rationalized usage of S3_UPLOAD_BUCKET
Fixed anonymous credentials
Logging now requires credentials - S3 doesn't allow anonymous MultiPartUploads
Standardized use of S3_RELEASE_BUCKET for builds; S3_UPLOAD_BUCKET only used for uploading logs.
Karmastic added 2 commits Jul 8, 2019
egieseke (Contributor) left a comment

These changes look ok to me. I will retest with Algonet to make sure that works.

egieseke (Contributor) left a comment

I successfully built and deployed using Algonet.

winder (Member) left a comment

Looks great! Mostly nitpicks and minor suggestions.

.travis.yml Outdated

  - name: release
-   if: branch =~ /^rel\/nightly/ AND type != pull_request
+   if: (branch =~ /^rel\/nightly/ OR branch =~ /^rel\/dev/) AND type != pull_request

winder (Member) commented Jul 10, 2019
I left this comment on algorand/go-algorand-internal#3

To support more release branches than just rel/nightly, we should specifically exclude rel/stable rather than only include nightly:

This seems to work: branch != rel/stable AND branch =~ /^rel\//

$ travis-conditions eval "branch != rel/stable AND branch =~ /^rel\//" --data '{"branch": "rel/stable"}'
false
$ travis-conditions eval "branch != rel/stable AND branch =~ /^rel\//" --data '{"branch": "rel/nightly"}'
true
$ travis-conditions eval "branch != rel/stable AND branch =~ /^rel\//" --data '{"branch": "master"}'
false
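The same include/exclude rule can be reproduced in plain bash, which makes the truth table above easy to check locally. This is a hypothetical helper for illustration, not part of the PR:

```shell
#!/usr/bin/env bash
# should_release mimics the Travis condition:
#   branch != rel/stable AND branch =~ /^rel\//
# Exit status 0 means "release", non-zero means "skip".
should_release() {
  local branch="$1"
  [[ "${branch}" != "rel/stable" && "${branch}" =~ ^rel/ ]]
}

for b in rel/stable rel/nightly master; do
  if should_release "${b}"; then
    echo "${b}: true"
  else
    echo "${b}: false"
  fi
done
```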

winder (Member) commented Jul 10, 2019
Additionally, we could use the travis deployments for this rather than tasks. I'm using the github releases provider for SHD but we could use the script or s3 provider as well:
https://docs.travis-ci.com/user/deployment/script/
https://docs.travis-ci.com/user/deployment/s3/

if [[ ! -z "${S3_RELEASE_BUCKET}" && -z "${BUCKET}" ]]; then
    echo "Ignoring S3_RELEASE_BUCKET setting - defaulting to algorand-internal. Use -b to override."
fi
S3_RELEASE_BUCKET="${BUCKET:-algorand-internal}"

winder (Member) commented Jul 10, 2019
nit: Could you set the algorand-internal default at the top of the file when the rest are initialized to empty? I think it will let you simplify the code a bit:

if [[ ! -z "${S3_RELEASE_BUCKET}" ]]; then
    echo "Ignoring S3_RELEASE_BUCKET setting - defaulting to algorand-internal.  Use -b to override."
fi

# If we really need this it would always be.
export S3_RELEASE_BUCKET="${BUCKET}"

Then just use -b "${BUCKET}" at the end.
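The `${BUCKET:-algorand-internal}` expansion both snippets rely on falls back to the default only when the variable is unset or empty. A minimal sketch (function and bucket names are illustrative):

```shell
#!/usr/bin/env bash
# ${BUCKET:-algorand-internal} expands to $BUCKET when it is set and
# non-empty, otherwise to the literal default string.
resolve_bucket() {
  local BUCKET="$1"
  echo "${BUCKET:-algorand-internal}"
}

resolve_bucket ""           # falls back to the default
resolve_bucket "my-bucket"  # keeps the caller's value
```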

Karmastic (Author, Contributor) commented Jul 25, 2019
Initializing prevents the ability to report this warning.

winder (Member) commented Jul 26, 2019
Given the users of this script I don't think it's a major point, but with the current logic S3_RELEASE_BUCKET could be silently overwritten with the default value.

if [[ ! -z "${S3_RELEASE_BUCKET}" && -z "${BUCKET}" ]]; then
    echo "Ignoring S3_RELEASE_BUCKET setting - defaulting to algorand-internal. Use -b to override."
fi
S3_RELEASE_BUCKET="${BUCKET:-algorand-internal}"

winder (Member) commented Jul 10, 2019
Same as above

if [[ ! -z "${S3_RELEASE_BUCKET}" && -z "${BUCKET}" ]]; then
    echo "Ignoring S3_RELEASE_BUCKET setting - defaulting to algorand-internal. Use -b to override."
fi
S3_RELEASE_BUCKET="${BUCKET:-algorand-internal}"

winder (Member) commented Jul 10, 2019
Same as above

@@ -13,10 +13,12 @@ set -e
 CHANNEL=""
 FULLVERSION=""
 S3CMD="s3cmd"
+BUILD_BUCKET=${S3_RELEASE_BUCKET}
+RELEASE_BUCKET="algorand-releases"

winder (Member) commented Jul 10, 2019
This is the pattern I like :)

@@ -50,9 +52,10 @@ function promote_stable() {
init_s3cmd

# Copy the _CHANNEL_ pending 'node' files to _CHANNEL-canary_

winder (Member) commented Jul 10, 2019
Update comment

Karmastic (Author, Contributor) commented Jul 25, 2019
What comment update is needed?

winder (Member) commented Jul 26, 2019
It's also copying the files to a new bucket

Remove pending_ from 'node' files, rename to CHANNEL-canary, and copy to the RELEASE_BUCKET

@@ -31,5 +31,5 @@ TARFILE=${TEMPDIR}/config_${CHANNEL}_${FULLVERSION}.tar.gz
 cd $1
 tar -zcf ${TARFILE} * >/dev/null 2>&1

-${GOPATH}/bin/updater send -s ${TEMPDIR} -c ${CHANNEL}
+${GOPATH}/bin/updater send -s ${TEMPDIR} -c ${CHANNEL} -b "${S3_RELEASE_BUCKET}"

winder (Member) commented Jul 10, 2019
Has this environment variable been validated by another script which calls this one?

 	next struct {
 		path string
 		f    *os.File
 	}
 	err error
 }

-func makeFileIterator(files []string, bucket string) s3manager.BatchUploadIterator {
+func makeFileIterator(files []string, bucket string, subFolder string) s3manager.BatchUploadIterator {
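For context, an s3manager batch-upload iterator walks a list of files via a Next()/Err() contract, and the new subFolder parameter prefixes each destination key. A dependency-free sketch of that shape (the type, method set, and key layout here are illustrative, not the PR's exact code or the AWS SDK interface):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// fileIterator yields one destination object key per input file, prefixed
// with a target folder -- the same Next()/Err() walking pattern as a batch
// upload iterator, minus the actual upload objects.
type fileIterator struct {
	files     []string
	bucket    string
	subFolder string
	pos       int
	err       error
}

func makeFileIterator(files []string, bucket, subFolder string) *fileIterator {
	return &fileIterator{files: files, bucket: bucket, subFolder: subFolder, pos: -1}
}

// Next advances to the next file; it returns false when the list is exhausted.
func (it *fileIterator) Next() bool {
	it.pos++
	return it.pos < len(it.files)
}

// Err reports any error hit while iterating (always nil in this sketch).
func (it *fileIterator) Err() error { return it.err }

// Key is the destination object key for the current file.
func (it *fileIterator) Key() string {
	return filepath.ToSlash(filepath.Join(it.subFolder, filepath.Base(it.files[it.pos])))
}

func main() {
	it := makeFileIterator([]string{"/tmp/a.tar.gz", "/tmp/b.tar.gz"}, "algorand-uploads", "logs")
	for it.Next() {
		fmt.Printf("s3://%s/%s\n", it.bucket, it.Key())
	}
}
```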

winder (Member) commented Jul 10, 2019
The other function named this targetFolder, might as well keep them consistent.


func checkCredentialsRequired(action string, bucketName string) (required bool) {
	required = true
	if action == downloadAction && bucketName == s3DefaultReleaseBucket {

winder (Member) commented Jul 10, 2019
spaces -> tab

})
}
}

winder (Member) commented Jul 10, 2019
Is it possible to check the session credentials in gotHelper.session? Would be nice to check directly that the anonymous credentials work. I guess it's implicitly tested in test5/test9 - maybe give those tests more specific names?

We no longer prefix with "pending_" and instead move from the build bucket to the release bucket.
winder (Member) left a comment

LGTM

Karmastic added 2 commits Jul 29, 2019
winder approved these changes Jul 30, 2019
Karmastic merged commit 7588e41 into master Jul 30, 2019
3 checks passed:
Travis CI - Branch: Build Passed
Travis CI - Pull Request: Build Passed
license/cla: Contributor License Agreement is signed.
Karmastic deleted the david/newS3Buckets branch Jul 30, 2019