
The framework default Gunicorn configs do not make sense for Gen2 environments #241

Open
xSAVIKx opened this issue May 12, 2023 · 5 comments
Labels
enhancement New feature or request P3

Comments


xSAVIKx commented May 12, 2023

Hey, so we're extensively using the framework to create our GCF and Cloud Run services, and it is very easy to use, which is great.

But when it comes to fine-tuning performance, things go rogue. E.g., some of our Cloud Run services need ~5 GB RAM, so we're effectively at 2 vCPUs already, while the hard-coded Gunicorn config forces a single worker:

    self.options = {
        "bind": "%s:%s" % (host, port),
        "workers": 1,
        "threads": 1024,
        "timeout": 0,
        "loglevel": "error",
        "limit_request_line": 0,
    }

And the same issue applies to GCF gen2, where larger memory tiers come with more vCPUs that you pay for but cannot use.

I suggest we either keep the current defaults but let people configure/adjust the Gunicorn params, or implement a smarter solution that picks appropriate defaults (at least based on the number of vCPUs available).
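One possible shape for such a smarter default (a sketch only; the `WORKERS` environment variable here is hypothetical, not an existing framework option) would be to fall back to the vCPU count when no explicit value is given:

```python
import os

def default_workers() -> int:
    """Return the worker count from a (hypothetical) WORKERS env var,
    falling back to the number of vCPUs visible to the process."""
    env_value = os.environ.get("WORKERS")
    if env_value is not None:
        return max(1, int(env_value))
    return max(1, os.cpu_count() or 1)

# Drop-in replacement for the hard-coded "workers": 1 above.
options = {
    "workers": default_workers(),
    "threads": 1024,
    "timeout": 0,
    "loglevel": "error",
    "limit_request_line": 0,
}
```

This keeps today's single-worker behavior on 1-vCPU instances while using the extra cores on larger ones.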

Author

xSAVIKx commented May 31, 2023

Any updates?

@HKWinterhalter HKWinterhalter added enhancement New feature or request P3 labels Jun 14, 2023
Contributor

HKWinterhalter commented Jun 14, 2023

Hi @xSAVIKx, thank you for the suggestion. If I understand correctly, your concern is that there will be only a single worker per function instance, which would be a bottleneck and wasteful for higher vCPU counts. You are correct, currently there is no way to tune this for Cloud Functions. I will mark the ability to configure worker count as an Enhancement request.

Other ways you can configure your Function deployment:

The number of instances scales up and down based on need, with lower and upper bounds you can configure. See: https://cloud.google.com/functions/docs/configuring/min-instances.

The thread setting is 1024 because the concurrency feature, if turned on, allows many requests to be served by a single instance. See: https://cloud.google.com/functions/docs/configuring/concurrency

Perhaps you could try playing around with the above controls for performance tuning.

As a reminder, Cloud Functions is meant to be a more managed offering: some controls are unavailable, and vCPU is purposely an abstraction. If you still need more control, you may find it with Cloud Run; for example, its docs have a section about optimizing Python applications with Gunicorn: https://cloud.google.com/run/docs/tips/python#optimize_gunicorn
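For reference, on Cloud Run you can bypass the framework's defaults entirely by invoking Gunicorn yourself; a hedged config fragment (the module path `main:app` and the specific worker/thread values are illustrative assumptions, not prescribed numbers):

```shell
# Example container CMD in the spirit of the Cloud Run Python tips linked above.
# Tune --workers to the instance's vCPU count; main:app is a placeholder module path.
gunicorn --bind :$PORT --workers 2 --threads 8 --timeout 0 main:app
```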

Let me know if that helps!

@HKWinterhalter HKWinterhalter added intended behavior and removed enhancement New feature or request P3 labels Jun 14, 2023
Author

xSAVIKx commented Jun 14, 2023

Hi @HKWinterhalter. Thanks for the explanations, but please clarify a couple of things.

A Function deployment still has vCPU/RAM resources available to each Functions server (instance), right? Are you saying that if I pick a gen2 GCF with 8 vCPU, it will use all 8 vCPUs with the current Functions Framework setup?

Or are these not really CPUs but allocations of a single CPU with some GHz then?

Here's an example deployment screen from the GCP console:
[screenshot: deployment configuration in the GCP console]

@HKWinterhalter
Contributor

After further investigation, your initial understanding was correct. I am marking this and your suggestions as an enhancement request.

(I have also edited my original response so as not to confuse future readers.)

Thank you!

Author

xSAVIKx commented Aug 30, 2023

@HKWinterhalter is it going to be OK if I create a PR to implement support for custom Gunicorn configs? Will someone have time to review and release it then?
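A minimal sketch of what such support could look like (the `GUNICORN_*` environment-variable scheme is hypothetical, not an existing framework API): merge environment overrides into the default options dict before handing it to Gunicorn.

```python
import os

# The framework's current hard-coded defaults.
DEFAULT_OPTIONS = {
    "workers": 1,
    "threads": 1024,
    "timeout": 0,
}

def apply_env_overrides(options, environ=os.environ):
    """Return a copy of `options` with any GUNICORN_<KEY> env vars applied,
    e.g. GUNICORN_WORKERS=4 overrides options["workers"]."""
    merged = dict(options)
    for key in options:
        raw = environ.get("GUNICORN_" + key.upper())
        if raw is not None:
            merged[key] = int(raw)
    return merged
```

Unset variables would leave the current defaults untouched, so existing deployments keep their behavior.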
