
TimeoutError: Connection timed out after 120000ms #611

Closed
oran1248 opened this issue Sep 19, 2020 · 25 comments

@oran1248

oran1248 commented Sep 19, 2020

Describe the bug
When I run the serverless command, after roughly 6 minutes I get the following error:
TimeoutError: Connection timed out after 120000ms

The api-lambda folder size is 189 MB.
The default-lambda folder size is 100 MB.

To Reproduce
I just run serverless.

Expected behavior
Deployed successfully.

Screenshots
image

Desktop (please complete the following information):

  • OS: Windows
  • Version 10
@dphang
Collaborator

dphang commented Sep 19, 2020

I think this may be due to the AWS Lambda upload timing out: your api-lambda folder size seems pretty big, so maybe there are too many files to upload, or you may have a slow network.

I don't think you're hitting the Lambda 50 MB limit, as there should be a different error message.

I believe you should be able to set AWS_CLIENT_TIMEOUT (see serverless/serverless#937) and see if that helps.

@oran1248
Author

@dphang Thanks, I will try it and update. I don't understand why the files are so big; my project is a standard project. Why does AWS have this limitation anyway? I'm facing too many issues for a simple deploy :(

@danielcondemarin
Contributor

What's the output when you run serverless --debug?

@oran1248
Author

image

@danielcondemarin
Contributor

You're missing a dash, should be --debug

@oran1248
Author

You are right, but it is the same message at the end. The last DEBUG message is:
DEBUG ─ Creating lambda X in the us-east-1 region. And then what's shown in the screenshot above.

@danielcondemarin
Contributor

Do you have a .env file in the same directory as serverless.yml?

AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...

@oran1248
Author

I have already fixed the credentials issue; the deployment process passed that phase. I set the right environment variables. Now I'm trying with AWS_CLIENT_TIMEOUT=600000 as @dphang suggested.
Unfortunately, because of all the failed deployments, I've got an email from AWS saying: Your AWS account X has exceeded 85% of the usage limit for one or more AWS Free Tier-eligible services for the month of September
😭

@oran1248
Author

Still got the same error... even after running export AWS_CLIENT_TIMEOUT=600000 before serverless

@dphang
Collaborator

dphang commented Sep 19, 2020

Ah, sorry about that. I've been using the regular Serverless Framework too much, so I confused the two; I think AWS_CLIENT_TIMEOUT is just for the regular Serverless Framework.

Since this is a Serverless component, we call aws-sdk to create the Lambda directly in this component. The code is here:

const res = await lambda.createFunction(params).promise();
maybe you can try to clone and build this repo and update the aws-sdk client timeout in a similar way as serverless/serverless#937? (We should make the change in this component, but it would need a PR).
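
For reference, a minimal sketch of what that change could look like (this is not the component's actual code, just the aws-sdk v2 pattern; the 600000 ms value is only an example):

// Hypothetical sketch: give the aws-sdk v2 Lambda client a longer HTTP
// timeout than the 120000 ms default before calling createFunction.
const AWS = require("aws-sdk");

const lambda = new AWS.Lambda({
  region: "us-east-1",
  httpOptions: {
    timeout: 600000, // 10 minutes instead of the default 2 minutes
  },
});

// The existing upload call would then stay the same:
// const res = await lambda.createFunction(params).promise();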

Besides this, I don't think it should normally take more than 120 seconds to upload the Lambda. Do you have a slow network connection? An API Lambda folder size of 189 MB is probably close to 50 MB zipped, which is 400 Mbits; to upload that in 120 seconds you need at least 3.33 Mbit/s of upload speed. That would be fine in CI/CD, but on your computer, I understand it may not be. Alternatively, try to reduce any large dependencies you may have to reduce the API Lambda size.

@oran1248
Author

I have a fast internet connection. I really don't understand; it seems like a common use case to me. Am I the only one who uploads a Next.js project to AWS Lambda? Something doesn't make sense. Should I switch to Google Cloud Functions?
Which service should I deploy to such that it will work?

@dphang
Collaborator

dphang commented Sep 19, 2020

I see, thanks for clarifying. If you have a fast connection, then I really don't understand why it should take more than 120 seconds either.

Yes, uploading to AWS Lambda is what this component does, and it is a common use case. What I meant is that I've not seen a case where it took more than 120 seconds to upload to AWS Lambda except when it's due to a slow network and a large file. So that's why I first thought it was your network or some problem with your API handler's contents or the Lambda ZIP itself...

As for other options to help debug (besides what I've given above):

  1. Manually zip and upload the API Lambda via S3 and see if that works.
  2. Create a script to zip and use aws-sdk to upload the Lambda and set a timeout > 120 sec (if you don't want to build and locally modify this project, you can try that, maybe it's quicker) - a rough sketch follows after this list.
  3. Does a basic project (e.g. create-next-app) deploy fine for you?
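
Here is a rough, hypothetical sketch of option 2 (the function name, role ARN, and ZIP path are placeholders; you would zip the built api-lambda folder yourself first):

// Hypothetical standalone script: upload a pre-built Lambda ZIP with aws-sdk
// directly, using an HTTP timeout longer than the 120000 ms default.
const fs = require("fs");
const AWS = require("aws-sdk");

const lambda = new AWS.Lambda({
  region: "us-east-1",
  httpOptions: { timeout: 600000 }, // > 120 sec
});

async function uploadTestLambda() {
  const params = {
    FunctionName: "my-test-api-lambda", // placeholder name
    Runtime: "nodejs12.x",
    Role: "arn:aws:iam::123456789012:role/my-lambda-role", // placeholder role ARN
    Handler: "index.handler",
    Code: { ZipFile: fs.readFileSync("api-lambda.zip") }, // placeholder ZIP path
  };
  const res = await lambda.createFunction(params).promise();
  console.log("Created:", res.FunctionArn);
}

uploadTestLambda().catch(console.error);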

In the meantime, I'll see about creating a PR to allow users to set the aws-sdk timeout higher than the default of 120 sec.

Also, Serverless-next.js currently just targets AWS, and I don't have experience with Google Cloud, so unfortunately I can't help much with that...

@oran1248
Author

oran1248 commented Sep 19, 2020

Thank you for the help!
A few questions about the options that you've listed above:

  1. How is it going to help me? It isn't a full deployment, right?
  2. I guess I can try it.
  3. I tried it and it worked:
    image
    For some reason I couldn't set the bucketName; this is my serverless.yml:
# serverless.yml

example-app:
  component: "@sls-next/serverless-component@1.15.1"
  input: 
    bucketName: "example-app-static"
    

Also, why aren't there any functions listed in the Lambda functions section after deployment?
image

@dphang
Collaborator

dphang commented Sep 19, 2020

@oran1248 yeah, for (1) that was just to see if your zip and API Lambda works if uploaded another way e.g using S3, maybe aws-sdk which hits the Lambda upload server was having trouble reading it hence timing out after 120 secs.

69 seconds seems ok to me (assuming it had to create a new CloudFront distribution?).

Another small thing I thought of: even if your connection is fast, being quite far from us-east-1 (North Virginia) could possibly affect things too, especially when having to upload a large ZIP. You could also try to set up a simple CI workflow, e.g. a GitHub Actions workflow, and see if the deployment works from there. The machines run in the US and I believe have at least 1 Gbps connections. I am in Seattle (us-west) and have a 100 Mbps / 50 Mbps upload connection, so it's not a problem for me either way, though I mostly don't deploy manually (I use GitHub Actions workflows).

Sorry for all the trouble, but at least we now know of some places to improve upon.

PS: I also noticed you are using 1.15.1; it might be worth upgrading to 1.17 for the latest fixes and features. Though I don't think the Lambda upload code was modified between those versions, so it probably won't fix this issue.

@dphang
Collaborator

dphang commented Sep 19, 2020

> Also, why aren't there any functions listed in the Lambda functions section after deployment?

The Lambdas should all be in us-east-1; check whether you are in the right region. This component deploys to Lambda@Edge, so it gets replicated across all CloudFront edge locations, but the base Lambda is created in us-east-1.
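
If it helps, a quick hypothetical check with aws-sdk (not part of this component) to list what actually landed in us-east-1:

// List the Lambda functions in us-east-1, where the base Lambdas are created.
const AWS = require("aws-sdk");
const lambda = new AWS.Lambda({ region: "us-east-1" });

lambda
  .listFunctions({ MaxItems: 50 })
  .promise()
  .then((res) => res.Functions.forEach((f) => console.log(f.FunctionName)))
  .catch(console.error);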

@oran1248
Author

oran1248 commented Sep 19, 2020

What do you mean by "assuming it had to create a new CloudFront distribution"?
Does it create a new distribution every time I deploy my app?

Changed to 1.17 👍

Now that I've changed the region to us-east-1 I can see all the Lambdas.
The red ones belong to create-next-app.
The blue ones belong to my original (heavy?) project.
image

Is a ~47 MB Lambda size reasonable?

P.S. - Setting the bucketName is not working for me; do you know why? This is my serverless.yml content:

# serverless.yml

example-app:
  component: "@sls-next/serverless-component@1.15.1"
  input: 
    bucketName: "example-app-static"

UPDATE: I've succeeded in deploying to Vercel 😃 Is there a reason to use AWS instead?

@dphang
Collaborator

dphang commented Sep 20, 2020

> What do you mean by "assuming it had to create a new CloudFront distribution"?
> Does it create a new distribution every time I deploy my app?

When you create a new app deployment (e.g. for the first time), and also if you did not sync the .serverless state, it will create a new distribution. Recently, a way was added to specify an input in serverless.yml to choose an existing distribution to use.

> Changed to 1.17 👍

> Now that I've changed the region to us-east-1 I can see all the Lambdas.
> The red ones belong to create-next-app.
> The blue ones belong to my original (heavy?) project.
> image
>
> Is a ~47 MB Lambda size reasonable?

It's pretty close, considering 50 MB is the limit. But it can happen if you have a bunch of API routes and dependencies, as the normal target will bundle all dependencies into each route. experimental-serverless-trace could help reduce duplicate dependencies, but I found the performance may be worse because of having to require more files.

> P.S. - Setting the bucketName is not working for me; do you know why? This is my serverless.yml content:
>
> # serverless.yml
>
> example-app:
>   component: "@sls-next/serverless-component@1.15.1"
>   input: 
>     bucketName: "example-app-static"
>
> UPDATE: I've succeeded in deploying to Vercel 😃 Is there a reason to use AWS instead?

Yes, Vercel is obviously a great choice as they made Next.js after all, so they would have all the features and optimizations. I'm pretty sure they also use vanilla AWS Lambda instead of Lambda@Edge for their SSR pages / API routes. It has a simpler UX and is well-integrated into GitHub, Bitbucket etc. However, do note their limits, and if you need more than the limits, it can cost you (e.g for a team/business it's $20/month/user).

Personally I found that Vercel's page performance on cold start is worse. I think it's because they use serverless-trace, which reduces dependency code duplication, but seems to cause worse performance (probably due to having to require way more files instead of a single bundled JS with the normal serverless target). See: vercel/next.js#16276. Basically, Lambda code size has little to negligible impact on performance, but require time has a huge impact.

The advantage of serverless-next.js is that you can manage your AWS resources and easily integrate with other AWS resources if you are already using AWS. It also may be cheaper in terms of money (but maybe costs you slightly more time to manage AWS infra) and if you need a feature or found a bug, it's open source so you can just create a PR.

@oran1248
Author

@dphang

> The advantage of serverless-next.js is that you can manage your AWS resources and easily integrate with other AWS resources if you are already using AWS.

Let's say I'm also using S3 to store users' files on my site; how is this related to the fact that I'm deploying with serverless-next.js? Which integration is needed?

> yeah, for (1) that was just to see if your zip and API Lambda works if uploaded another way e.g using S3, maybe aws-sdk which hits the Lambda upload server was having trouble reading it hence timing out after 120 secs.

So I need to zip the .serverless_nextjs folder and upload it to S3? Will it automatically recognize it and create the Lambdas? 🤔

> Create a script to zip and use aws-sdk to upload the Lambda and set a timeout > 120 sec (if you don't want to build and locally modify this project, you can try that, maybe it's quicker)

I don't know if creating my own script is a good idea; I'm guessing you are doing a lot of magic inside the serverless-next.js code. Don't you think?

> maybe you can try to clone and build this repo and update the aws-sdk client timeout in a similar way as serverless/serverless#937? (We should make the change in this component, but it would need a PR).

So the problem is that serverless-next.js doesn't have an option for setting AWS_CLIENT_TIMEOUT, but aws-sdk does?
Do you think cloning and building this repo is a good idea? How can I easily do it?
I really want to solve this issue already 🙏

@dphang
Collaborator

dphang commented Sep 20, 2020

> The advantage of serverless-next.js is that you can manage your AWS resources and easily integrate with other AWS resources if you are already using AWS.
>
> Let's say I'm also using S3 to store users' files on my site; how is this related to the fact that I'm deploying with serverless-next.js? Which integration is needed?

Well, that's probably not directly related. I mean things like you can share ACM certificates with other services like your API, use Web Application Firewall, etc.

> yeah, for (1) that was just to see if your zip and API Lambda works if uploaded another way e.g using S3, maybe aws-sdk which hits the Lambda upload server was having trouble reading it hence timing out after 120 secs.
>
> So I need to zip the .serverless_nextjs folder and upload it to S3? Will it automatically recognize it and create the Lambdas? 🤔

See here: https://aws.amazon.com/about-aws/whats-new/2015/05/aws-lambda-supports-uploading-code-from-s3/. I think you still need to create the Lambda function, but you can specify where the ZIP is coming from, either uploaded directly or from an S3 bucket. This was just a suggestion to rule out whether it was a problem with your ZIP.
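
A hedged sketch of that approach (bucket, key, role, and function names are placeholders): upload the ZIP to S3 yourself, then point createFunction at it instead of sending the bytes inline.

// Hypothetical sketch: create the function from a ZIP already sitting in S3,
// so the large payload does not go through the direct-upload path.
const AWS = require("aws-sdk");
const lambda = new AWS.Lambda({ region: "us-east-1" });

const params = {
  FunctionName: "my-test-api-lambda", // placeholder
  Runtime: "nodejs12.x",
  Role: "arn:aws:iam::123456789012:role/my-lambda-role", // placeholder
  Handler: "index.handler",
  Code: {
    S3Bucket: "my-deploy-artifacts", // placeholder bucket
    S3Key: "api-lambda.zip", // placeholder key
  },
};

lambda
  .createFunction(params)
  .promise()
  .then((res) => console.log("Created:", res.FunctionArn))
  .catch(console.error);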

> Create a script to zip and use aws-sdk to upload the Lambda and set a timeout > 120 sec (if you don't want to build and locally modify this project, you can try that, maybe it's quicker)
>
> I don't know if creating my own script is a good idea; I'm guessing you are doing a lot of magic inside the serverless-next.js code. Don't you think?

Sure, there are some complex parts in this code, but the problem here seemed isolated to the Lambda creation/upload itself. So the idea was to upload via the aws-sdk so you can see if you can successfully upload with an increased timeout.

> maybe you can try to clone and build this repo and update the aws-sdk client timeout in a similar way as serverless/serverless#937? (We should make the change in this component, but it would need a PR).
>
> So the problem is that serverless-next.js doesn't have an option for setting AWS_CLIENT_TIMEOUT, but aws-sdk does?
> Do you think cloning and building this repo is a good idea? How can I easily do it?

Yea, you would need to look at CONTRIBUTING.md. But it may take some time to understand the code, so it is up to you.

> I really want to solve this issue already 🙏

Yup, and I want to help too. I was providing some suggestions to help you debug, but it will take a bit of elbow grease.

I think allowing an increased aws-sdk timeout to be set will probably help you, and I can work on a PR later this week when I have some time. But I think this is more of a band-aid fix - I still feel that the Lambda upload really shouldn't be taking > 2 minutes in the first place.

Also, one other thing you can try is the useServerlessTraceTarget setting in serverless.yml. This is the same target Vercel uses and could help reduce your dependency size, so it might help you upload successfully.

@oran1248
Author

oran1248 commented Sep 20, 2020

@dphang

> See here: https://aws.amazon.com/about-aws/whats-new/2015/05/aws-lambda-supports-uploading-code-from-s3/. I think you still need to create the Lambda function, but you can specify where the ZIP is coming from, either uploaded directly or from an S3 bucket. This was just a suggestion to rule out whether it was a problem with your ZIP.

I will give it a try.

> Also, one other thing you can try is the useServerlessTraceTarget setting in serverless.yml. This is the same target Vercel uses and could help reduce your dependency size, so it might help you upload successfully.

It worked! So it is the bundle size?

> I think allowing an increased aws-sdk timeout to be set will probably help you, and I can work on a PR later this week when I have some time.

I think this is my best option for now.

@dphang
Collaborator

dphang commented Sep 20, 2020

Glad it worked. It may be the bundle size, but I thought the Lambda upload would usually give a failure message saying it is too big. Perhaps it's a bug with the Lambda upload endpoint?

Maybe we can add checks for the ZIP size and add a warning (or even fail the build if we know the ZIP is over the 50 MB limit).
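
A minimal sketch of what such a check might look like (the path handling and thresholds here are assumptions, not the component's actual code; 50 MB is the direct-upload limit discussed above):

// Hypothetical pre-upload check: warn when the built ZIP approaches the
// 50 MB Lambda limit, and fail fast when it exceeds it.
const fs = require("fs");

const LAMBDA_ZIP_LIMIT_BYTES = 50 * 1024 * 1024;

function checkZipSize(zipPath) {
  const sizeBytes = fs.statSync(zipPath).size;
  const sizeMb = (sizeBytes / 1024 / 1024).toFixed(1);
  if (sizeBytes > LAMBDA_ZIP_LIMIT_BYTES) {
    throw new Error(`${zipPath} is ${sizeMb} MB, over the 50 MB Lambda limit`);
  }
  if (sizeBytes > 0.9 * LAMBDA_ZIP_LIMIT_BYTES) {
    console.warn(`${zipPath} is ${sizeMb} MB, close to the 50 MB Lambda limit`);
  }
}

checkZipSize("api-lambda.zip"); // placeholder path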

@oran1248
Author

That would be great. Now that we know it's the ZIP size, is there no option to upload a ZIP file that is more than 50 MB at all? That sounds strange, because as I said, I'm sure there are projects that are larger than mine.

@dphang
Collaborator

dphang commented Sep 20, 2020

Yup, if you search the issues, a few people had similar problems. For example, #141 (comment).

Currently the API and pages each get their own Lambda@Edge, but each then has a 50 MB limit per AWS. So serverless-trace target support was added, which reduces code size by maintaining one set of dependencies (instead of bundling all dependencies into each page/route). But there were some caveats there, like potentially slower performance in my experience.

I think there was some thought around using multiple cache behaviors so there is a Lambda for each route, but there are some AWS limitations, e.g. a max of 25 behaviors unless you ask AWS for a quota increase, and it also adds complexity to this component.

Vercel solves this since they have their own custom CDN/routing layer, and I believe they split Lambdas (they seem to use regular Lambdas, not Lambda@Edge). Since this component is based on CloudFront/Lambda@Edge, there are more limitations inherent to AWS.

Anyway, thanks for the good discussion - it gave me a few ideas on improving the documentation and/or code for a better developer experience.

@oran1248
Author

@dphang
Thank you for the time and the help! 💪

@dphang
Collaborator

dphang commented Sep 20, 2020

Yup, no worries. For completeness' sake, I forgot to mention that you can also minimize your Next.js build outputs using something like this in next.config.js. It can be used instead of useServerlessTraceTarget. This is what I use for production since I use the regular serverless target: it makes the build take longer, but my Lambda ZIP is ~20% smaller. The only caveat is that minimizing can make it harder to debug if you don't have useful error messaging or logging, as you can't refer to line numbers in the source code.

(Uses terser-webpack-plugin; minification is opted into via the NEXT_MINIMIZE environment variable.)

// next.config.js
const TerserPlugin = require("terser-webpack-plugin");

module.exports = {
  webpack: (config, { buildId, dev, isServer, defaultLoaders, webpack }) => {
    // Only minimize the server-side production build, and only when
    // NEXT_MINIMIZE=true is set (e.g. in the build step).
    if (isServer && !dev && process.env.NEXT_MINIMIZE === "true") {
      config.optimization = {
        minimize: true,
        minimizer: [
          new TerserPlugin({
            parallel: true,
            cache: true,
            terserOptions: {
              output: { comments: false },
              mangle: true,
              compress: true,
            },
            extractComments: false,
          }),
        ],
      };
    }

    return config;
  },
};

I'll close the issue for now and work on improving the docs/code in the near future.

@dphang dphang closed this as completed Sep 20, 2020