
[Migrated] Getting Gateway Timeouts (504), Despite Setting timeout_seconds to 120 #916

Closed
jneves opened this issue Feb 20, 2021 · 10 comments


@jneves
Contributor

jneves commented Feb 20, 2021

Originally from: Miserlou/Zappa#2182 by JordanTreDaniel

I have deployed a Lambda that performs image manipulation, which can sometimes take a while. Longer than 30 seconds is a normal, expected time for some photos.

At first, the timeout was showing in the zappa tail logs, but now, after setting the timeout_seconds option to 120, it seems as though Amazon CloudFront / API Gateway is still enforcing a 30-second limit. I have dug around on AWS, and it seems I can only change this from a CloudFront console. I don't believe there is a CloudFront distribution for this Lambda.

I am running Python 3.6.

Expected Behavior

I would think that if I set timeout_seconds to 120 in the Zappa settings, Zappa would apply that setting not only to the Lambda, but also to whatever sits in front of it (API Gateway or a CloudFront distribution).

Actual Behavior

I get 504s when I don't have the Lambda downsample the image (i.e., when it's normal for the request to take long).

Possible Fix

Is there a way to apply the timeout_seconds to more than just the Lambda itself?

Steps to Reproduce

  1. Go to http://www.rapclouds.com
  2. Sign In (using oauth, super easy)
  3. Search a song
  4. Go to song
  5. Click blue settings button by word cloud
  6. Pick a "mask" (image to make the wordcloud from)
  7. Try different values between 3 and 0 for the downSample option.
  8. 0 downsampling is almost a guaranteed timeout exception. (Check the Network tab in devtools to verify.)

Your Environment

  • Zappa version used: 0.51.0
  • Operating System and Python version: I use Mac, but this is only a problem on AWS. (python 3.6)
  • The output of pip freeze:
argcomplete==1.12.0
boto3==1.14.31
botocore==1.17.31
certifi==2020.6.20
cfn-flip==1.2.3
chardet==3.0.4
click==7.1.2
cycler==0.10.0
docutils==0.15.2
durationpy==0.5
Flask==1.1.2
future==0.18.2
hjson==3.0.1
idna==2.10
itsdangerous==1.1.0
Jinja2==2.11.2
jmespath==0.10.0
kappa==0.6.0
kiwisolver==1.2.0
MarkupSafe==1.1.1
matplotlib==3.3.0
numpy==1.19.1
Pillow==7.2.0
pip-tools==5.3.0
placebo==0.9.0
pyparsing==2.4.7
python-dateutil==2.6.1
python-slugify==4.0.1
PyYAML==5.3.1
requests==2.24.0
s3transfer==0.3.3
scipy==1.5.3
six==1.15.0
text-unidecode==1.3
toml==0.10.1
tqdm==4.48.0
troposphere==2.6.2
urllib3==1.25.10
Werkzeug==0.16.1
wordcloud==1.7.0
wsgi-request-logger==0.4.6
zappa==0.51.0
```json
{
	"dev": {
		"app_function": "app.app",
		"aws_region": "us-east-1",
		"profile_name": "default",
		"project_name": "wordcloudflask",
		"runtime": "python3.6",
		"s3_bucket": "word-cloud-bucket",
		"slim_handler": true,
		"cors": true,
		"binary_support": false,
		"timeout_seconds": 120
	}
}
```

@flanaman

flanaman commented Jun 4, 2021

The API Gateway and CloudFront timeouts are 30 seconds and cannot be changed. The timeout_seconds setting does take effect for async invocations (for example, issuing Django manage commands).

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-requirements-limits.html

https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
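Since the gateway cap can't be raised, the usual workaround is to respond within the limit and do the slow work out of band: with Zappa that means decorating the slow function with `@zappa.asynchronous.task` so it runs in a separate invocation with the full `timeout_seconds` budget, while the request handler returns a job id the client can poll. A framework-agnostic sketch of that pattern (the work runs inline here so it is self-contained; the job-store and function names are hypothetical):

```python
import uuid

# In-memory job store. On AWS each Lambda invocation has its own memory,
# so a real app would keep job state in S3 or DynamoDB instead.
JOBS = {}

def process_image(job_id, payload):
    """Stand-in for the slow image work (the part that can exceed 30 s)."""
    JOBS[job_id] = {"status": "done", "result": payload.upper()}

def start_job(payload):
    """Respond well under the gateway cap by returning a job id.

    With Zappa, process_image would carry @zappa.asynchronous.task so the
    call below fans out into a separate Lambda invocation; here it runs
    inline to keep the sketch runnable.
    """
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"status": "pending", "result": None}
    process_image(job_id, payload)
    return job_id

def poll_job(job_id):
    """The client polls this (or a presigned S3 URL) until the work is done."""
    return JOBS[job_id]["status"]
```

The client then issues the cheap `poll_job` request on an interval instead of holding one long request open past the gateway timeout.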

@monkut
Collaborator

monkut commented Jul 16, 2022

API Gateway has a 30-second timeout.
The Zappa-configurable timeout applies to the Lambda function only.

closing.

@monkut monkut closed this as completed Jul 16, 2022
@JacobDel

JacobDel commented Feb 7, 2023

What can be done to avoid it? Any workarounds?

@sridhar562345
Contributor

sridhar562345 commented Feb 8, 2023

@JacobDel

What can be done to avoid it? Any workarounds?

We can't modify APIGW max timeout, what's your use case and why do you need to run it for more than 30 sec?

@JacobDel

JacobDel commented Feb 8, 2023

@sridhar562345
My app makes a lot of API calls (5-ish) during boot-up of the app.
I do have "keep_warm" enabled.
As I understand from here, only one Lambda is kept warm, so some of these API calls will need to start up a new Lambda.
The startup takes some time (2 to 3 minutes), because I also use slim_handler.
But I suppose I should find a way to keep multiple Lambdas warm?

Looking at my tail logs I don't think any lambda is warm: Miserlou/Zappa#2240
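For reference, the ping behaviour itself is configurable in zappa_settings.json. `keep_warm` and `keep_warm_expression` are documented Zappa settings, though each scheduled ping still only warms a single container:

```json
{
	"dev": {
		"keep_warm": true,
		"keep_warm_expression": "rate(4 minutes)"
	}
}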

@souravjamwal77
Collaborator

Hi @JacobDel, I think you can keep the main Lambda warm and make those API calls async. Also, to keep the Lambda warm, point the warm-up calls at an endpoint that doesn't ping your other APIs.

@JacobDel

JacobDel commented Feb 9, 2023

Thank you for your response @souravjamwal77 .
How can I choose which endpoints to keep warm?

@souravjamwal77
Collaborator

@JacobDel, you can create a separate dummy endpoint to keep warm. Or, whenever you get a request, check whether it's coming from a user or is a keep-warm call.

Then, if the call came from AWS, do nothing and return a success response.

And if the call is from a user, start hitting your other Lambdas asynchronously and load data on the front end asynchronously.
Finally, if any of your API calls still fails, try using Zappa's async feature.
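The dummy-endpoint idea above can be sketched framework-agnostically. The `/warmup` route and `X-Warmup` header below are hypothetical names, not Zappa's actual keep-warm payload (Zappa's own scheduled pings are handled before the WSGI router):

```python
def render_song_page(path):
    """Stand-in for the app's real, expensive request handling."""
    return f"page for {path}"

def handle_request(path, headers):
    # Scheduled warm-up pings hit a dedicated route and do no real work,
    # so they never touch the slow downstream APIs.
    if path == "/warmup" or headers.get("X-Warmup") == "true":
        return 200, "warm"
    # Real user traffic proceeds to the expensive code path.
    return 200, render_song_page(path)
```

The warm-up branch returns immediately, so the scheduled pings keep a container alive without ever triggering the 5-ish boot-up API calls.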

@sridhar562345
Copy link
Contributor

@JacobDel

Instead of using the slim handler, why don't you try deploying your app as a Docker image? That should reduce your initialization times.
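Zappa gained container-image support in 0.52. A rough sketch of that flow, assuming an ECR repository named `wordcloudflask` (repository name and account id are placeholders; check `zappa deploy --help` for the exact flags in your version):

```shell
# Bake the Zappa settings for this stage into the image build context
zappa save-python-settings-file dev

# Build and push the image to ECR
docker build -t wordcloudflask:latest .
aws ecr get-login-password | docker login --username AWS --password-stdin <account>.dkr.ecr.us-east-1.amazonaws.com
docker tag wordcloudflask:latest <account>.dkr.ecr.us-east-1.amazonaws.com/wordcloudflask:latest
docker push <account>.dkr.ecr.us-east-1.amazonaws.com/wordcloudflask:latest

# Point the Lambda at the image instead of the slim-handler zip
zappa deploy dev --docker-image-uri <account>.dkr.ecr.us-east-1.amazonaws.com/wordcloudflask:latest
```

With the dependencies baked into the image there is no slim-handler download from S3 on cold start, which is where the 2-to-3-minute startup was going.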

@JacobDel
Copy link

@souravjamwal77 I created a timed event like you mentioned, but that did not make a difference, even with a rate expression of 1 minute.
@sridhar562345 this solved my problem! I also noticed in the logs that the warm calls now actually do something after disabling the slim handler.

Thank you guys for solving my problem!
