
Option to upload zips to S3 to bypass size limit & multi-part uploads #248

Closed
franciscocpg opened this issue Aug 18, 2017 · 9 comments

@franciscocpg
Contributor

I'm trying to deploy a package that has a size of 109 MB — more than the 50 MB Lambda deployment package size limit, but less than the 250 MB limit on the size of code/dependencies that you can zip into a deployment package (ref: http://docs.aws.amazon.com/lambda/latest/dg/limits.html).
To work around this 50 MB deployment package size limit, we could do the following whenever the zip file is larger than 50 MB (and less than 250 MB, of course):

  1. Create an S3 bucket (if it does not exist).
  2. Use putObject to upload the zip file to the bucket.
  3. Create/update the Lambda function using the S3Bucket and S3Key fields of the FunctionCode/UpdateFunctionCodeInput structs (instead of the ZipFile field).
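
The decision between the two upload paths boils down to a size check. A minimal sketch in Go (the helper name `viaS3` is hypothetical, not part of up; the 50 MB and 250 MB figures are from the Lambda limits page linked above):

```go
package main

import (
	"errors"
	"fmt"
)

const (
	// Lambda's limit for a zip uploaded directly via the ZipFile field.
	directUploadLimit = 50 * 1024 * 1024
	// Lambda's limit for a deployment package fetched from S3.
	s3UploadLimit = 250 * 1024 * 1024
)

// viaS3 reports whether a zip of the given size must take the S3 route
// (steps 1-3 above), or returns an error when Lambda cannot accept it at all.
func viaS3(zipSize int64) (bool, error) {
	switch {
	case zipSize > s3UploadLimit:
		return false, errors.New("zip exceeds the 250 MB Lambda limit")
	case zipSize > directUploadLimit:
		return true, nil // bucket + putObject + S3Bucket/S3Key
	default:
		return false, nil // small enough for the ZipFile field
	}
}

func main() {
	need, err := viaS3(109 * 1024 * 1024) // the 109 MB package from this issue
	fmt.Println(need, err)                // true <nil>
}
```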

What do you think about this solution?
May I propose a PR?

@tj
Member

tj commented Aug 18, 2017

Hmmm yeah maybe a PR. I'd like to avoid it if possible since the FunctionCode stuff is so simple, no need to clean up after old functions etc, but I'm not opposed to S3 either. I thinkkkkk they may be raising this limit soon, so maybe it's not worth adding quite yet

@tj tj changed the title get rid of Lambda's limit of 52 MB Option to upload zips to S3 Aug 18, 2017
@tj tj changed the title Option to upload zips to S3 Option to upload zips to S3 to bypass size limit Aug 18, 2017
@tj
Member

tj commented Aug 18, 2017

Actually, maybe s3-only would be good at some point, we could do the multi-part upload to speed things up. Out of curiosity are you using Node? Maybe bundling will help there in the meantime.

@franciscocpg
Contributor Author

@tj
About the bundling size, after playing a while with .upignore I was able to reduce it to 20 MB.
But that doesn't invalidate this issue 😃.
AWS recommends using multipart upload once object sizes reach 100 MB. I'd stick with putObject for now and maybe evolve to multipart upload later.
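
The putObject-now, multipart-later split could be a single threshold check. A hypothetical sketch (the 100 MB figure is the AWS guidance mentioned above; `useMultipart` is not up's API):

```go
package main

import "fmt"

// multipartThreshold follows the AWS guidance of considering multipart
// uploads once objects reach about 100 MB.
const multipartThreshold = 100 * 1024 * 1024

// useMultipart reports whether a zip is large enough that a multipart
// upload would be preferable to a single putObject call.
func useMultipart(zipSize int64) bool {
	return zipSize >= multipartThreshold
}

func main() {
	fmt.Println(useMultipart(109 * 1024 * 1024)) // the 109 MB package: true
	fmt.Println(useMultipart(20 * 1024 * 1024))  // the bundled 20 MB zip: false
}
```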

@tj
Member

tj commented Aug 21, 2017

@franciscocpg nice! Yeah I agree, long-term it would be sweet if we could chunk it and speed those up

@tj tj changed the title Option to upload zips to S3 to bypass size limit Option to upload zips to S3 to bypass size limit & multi-part uploads Aug 21, 2017
@komuw

komuw commented Aug 22, 2017

I ran into a somewhat related issue but not exactly the same.

I'm trying to deploy a package that has a size of 38 MB, so it is still smaller than the 50 MB limit, but I'm on a bad internet connection (320 Kbps).

During deployment it takes a long time, then fails with: InvalidSignatureException: Signature expired.
Some google-fu shows that this is a clock-skew error [1], which is fixed in the aws-sdk for JS but not yet in the Go one [2].

So multipart uploads would probably also help in cases where the file size is less than 50 MB but the internet connection is bad.
The only problem is: from up's point of view, how does it detect a poor internet connection, and what is the optimal size of each part to upload? I'm sure these are solved questions in cloud tooling, but I'm not sure whether we would want up to incorporate them.

  1. InvalidSignatureException: Signature expired aws/aws-sdk-js#527
  2. Perform Clock Skew Correction in Go SDK aws/aws-sdk-go#423
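
A back-of-the-envelope calculation illustrates why multipart could help here even under 50 MB (illustrative arithmetic only, using the 38 MB / 320 Kbps figures above and 5 MB parts): one signed request for the whole zip spends ~16 minutes on the wire, while each part, signed separately, takes ~2 minutes.

```go
package main

import "fmt"

// uploadSeconds estimates wire time for a payload: size in bytes,
// line rate in bits per second. Illustrative only.
func uploadSeconds(sizeBytes, bitsPerSecond int64) float64 {
	return float64(sizeBytes*8) / float64(bitsPerSecond)
}

func main() {
	const rate = 320_000 // 320 Kbps, as reported above

	whole := uploadSeconds(38*1024*1024, rate) // one signed 38 MB request
	part := uploadSeconds(5*1024*1024, rate)   // one signed 5 MB part

	fmt.Printf("whole zip: %.0fs, per part: %.0fs\n", whole, part)
	// whole zip: 996s, per part: 131s
}
```

So the single request outlives a short signature-validity window on this connection, which matches the Signature expired failure, whereas each part finishes well inside it.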

@franciscocpg
Contributor Author

Hi @komuw
Take a look at this specific comment aws/aws-sdk-js#527 (comment).
So the user was having trouble using the aws cli with --zip-file (which is what up is doing now) over a slow network, and the solution for him was to first upload to S3 and then use the aws cli with the --s3-bucket and --s3-key options (which is what I am proposing in this issue, 😃).

Also, the serverless framework, which is mature and well battle-tested by now, always uses S3 to deploy its zip package. They have just one issue, serverless/serverless#27, related to InvalidSignatureException: Signature expired, from about two years ago, and it looks like at that time it was uploading the zip directly to Lambda.

IMO up should stick with S3-only for uploading the zip package, and that's fine.

@tj
Member

tj commented Aug 22, 2017

Yeah, as long as we "clean" the bucket so it's not littered with old deploys, things should be ok, just a bit more manual work. I believe the minimum chunk size is 5 MB, though on a bad connection ~4-5 chunks of 5 MB won't really improve uploads much. CI is definitely a nicer option there if possible
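
The chunk-count estimate follows directly from S3's multipart rules (a 5 MB minimum for every part except the last, and at most 10,000 parts per upload). A hypothetical helper, not up's API:

```go
package main

import "fmt"

const (
	minPartSize = 5 * 1024 * 1024 // S3's minimum size for all parts but the last
	maxParts    = 10000           // S3's maximum number of parts per upload
)

// partCount returns how many parts a multipart upload of `size` bytes needs
// when using the minimum 5 MB part size, capped at S3's part limit
// (beyond which larger parts would be required instead).
func partCount(size int64) int64 {
	n := (size + minPartSize - 1) / minPartSize // ceiling division
	if n > maxParts {
		n = maxParts
	}
	return n
}

func main() {
	fmt.Println(partCount(38 * 1024 * 1024))  // the 38 MB package above: 8 parts
	fmt.Println(partCount(109 * 1024 * 1024)) // the 109 MB package: 22 parts
}
```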

@filmaj

filmaj commented Oct 23, 2017

+1, have seen this happen on crappy internet when running up -v:

     0s      DEBU event platform.build.complete map[duration:462.61547ms]
     701ms   DEBU checking for role
     183ms   DEBU creating role
     193ms   DEBU attaching policy
     2ms     DEBU set role to arn:aws:iam::blahblah
     2ms     DEBU event platform.deploy map[stage:development region:us-west-1]
     9m0.164s DEBU fetching function config region=us-west-11
     0s      DEBU event platform.function.create map[stage:development region:us-west-1]
   ⠧ 0s      DEBU event platform.deploy.complete map[region:us-west-1 duration:9m0.168517587s stage:development]
     0s      DEBU event platform.deploy.complete map[region:us-west-1 duration:9m0.168517587s stage:development]
   ⠧ 0s      DEBU event deploy.complete map[duration:9m1.708846255s]
  Error: deploying: us-west-1: creating function: InvalidSignatureException: Signature expired: 20171023T033000Z is now earlier than 20171023T033401Z (20171023T033901Z - 5 min.)
	status code: 403, request id: blahblah

@franciscocpg
Contributor Author

Closed by #272
