
nginx worker_processes = auto is wrong in k8s #199

Closed
Raboo opened this issue Aug 22, 2023 · 3 comments
Raboo commented Aug 22, 2023

Affected Docker Images

All of the fpm-nginx variants.

Current Behavior

When running a Kubernetes pod with a 500 millicpu limit (half a core), the container spawns as many nginx workers as the host the pod runs on has cores. In fact, it doesn't matter what CPU limit you set: nginx always matches its worker count to the host's core count.

root@hello-world-84f485b6b4-vwmg4:/etc/php/7.4# ps aux|grep nginx|grep -v grep |wc -l
50

Expected Behavior

Spawn a number of workers matching the cores available to the container/pod, not the underlying host's core count.

Steps To Reproduce

Run a Kubernetes cluster on hosts that have multiple cores.
Create a serversideup/php:8.0-fpm-nginx deployment with a pod that has a limit of 1 core.
Check the number of running nginx workers.

Host Operating System

Linux

Docker Version

20.10.14

Anything else?

This should be automatic, but if it can't be automatic, perhaps an env variable that can control the number of workers.
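The env-variable idea above could be wired up in an entrypoint script. A minimal sketch (hypothetical, not part of these images) that derives a worker count from the cgroup v2 CPU quota:

```shell
# Hypothetical entrypoint sketch -- not part of the serversideup images.
# In a real container you would read the quota from /sys/fs/cgroup/cpu.max;
# a sample value stands in here so the snippet runs anywhere.
cpu_max="50000 100000"            # cgroup v2 format: "<quota> <period>" (0.5 CPU)
quota=${cpu_max%% *}
period=${cpu_max##* }
if [ "$quota" = "max" ]; then
  workers=$(nproc)                # no limit set: fall back to host core count
else
  workers=$(( (quota + period - 1) / period ))   # ceil(quota / period)
fi
echo "worker_processes $workers"  # -> worker_processes 1
```

The computed value could then be templated into nginx.conf (e.g. via envsubst) instead of relying on `worker_processes auto;`.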

@Raboo added the 🧐 Bug: Needs Confirmation label Aug 22, 2023
jaydrogers (Member) commented

This issue seems like it might be deeper in the NGINX configuration, correct?

These images ship NGINX's defaults, and the steps to reproduce are pretty vague -- especially for people who don't run K8s every day.

Are you aware of any NGINX directives that need to be changed?

jaydrogers (Member) commented

Closing for inactivity. Feel free to publish replication steps if you'd like to see this re-opened.

@jaydrogers closed this as not planned Aug 29, 2023

Raboo commented Aug 30, 2023

I did write a response, but it's not here; I don't know what happened. This is not specific to k8s.

"worker_processes auto;" won't work as you'd expect inside a cgroups-controlled environment (such as Docker, LXC, or similar).

nginx relies on the sysconf(_SC_NPROCESSORS_ONLN) call to determine the number of available CPUs and spawns workers accordingly. Unfortunately, inside cgroups this doesn't work: the reported CPU count stays the same whether you define a CPU subset or not.

So it's a cgroup thing, not Kubernetes. "auto" will always use the number of cores available to the underlying host, whether it's a VM or bare metal; it won't respect the limits set by cgroup v1 or cgroup v2.
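This can be checked directly: sysconf-backed tools report the host's core count even when a cgroup CPU limit is in place. A quick way to see both numbers (assuming cgroup v2; the file layout differs on cgroup v1):

```shell
# getconf exposes the same value nginx gets from sysconf(_SC_NPROCESSORS_ONLN):
getconf _NPROCESSORS_ONLN                 # prints the host core count, limit or not
# The actual cgroup v2 CPU limit, if any, lives here ("max 100000" = unlimited):
cat /sys/fs/cgroup/cpu.max 2>/dev/null || true
```

Inside a container limited to half a core, the first command still prints the host's core count while cpu.max shows something like "50000 100000".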

It's harder to reproduce this on macOS, since Docker/Podman runs containers inside a virtual machine with limited resources. But in a real production environment on Linux you should be able to replicate the issue, and it doesn't need to be a k8s cluster: start a cheap virtual machine with 2 cores and run a Docker container inside it that is limited to 1 core, and you will see two worker processes inside the running container.
