Use 1 nginx worker process #10898
The value 'auto' will create a worker process for each CPU thread, which can be problematic in containerised environments with CPU limits. A single NGINX worker process should be more than sufficient. Fixes: blakeblackshear#10898
There is a lot of documentation on how to improve nginx performance; this is part of the recommended configuration, and we specifically added it a couple of versions ago when focusing on nginx/HLS performance: https://www.cloudbees.com/blog/tuning-nginx#worker-threads
Also, to be clear: for your specific needs you can just bind mount an edited nginx config to change it in your container.
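The bind-mount workaround suggested above might look like this in docker-compose (the container path matches the repo layout; the image tag and host path are illustrative, not prescribed by the maintainers):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    volumes:
      # Overlay the bundled nginx.conf with a locally edited copy
      # (e.g. one that sets "worker_processes 1;")
      - ./nginx.conf:/usr/local/nginx/conf/nginx.conf:ro
```

The `:ro` flag keeps the override read-only inside the container, so the image's own startup scripts cannot modify it.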
This is not an edge case; I would imagine the majority of Frigate deployments are in containerised environments. Each worker process can handle 1024 connections, which is plenty for any self-hosted application. Surely the conservative approach is to ask enterprise users who need performance tuning to mount custom NGINX configurations, rather than the majority of users? As asked in the associated PR, how would you propose this be fixed? It is actually detrimental to performance to have 128 NGINX worker processes when only a single CPU thread is available because of cgroups.
I am not referring to containerized environments. I am referring to using large-scale enterprise CPUs but only passing in a single CPU core, so there is an imbalance. For now you can fix it for yourself, as I already said. Long term, we may make the nginx config dynamically defined instead of statically defined, at which point it would be easier to configure without using docker bind mounts.
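A dynamically defined config like the one floated above could, hypothetically, size the worker count from the container's actual CPU quota rather than the host's thread count. This is a sketch only: the function name is invented, the file layout assumes cgroup v2 (`/sys/fs/cgroup/cpu.max`), and nothing like this exists in Frigate's entrypoint today.

```shell
#!/bin/sh
# Hypothetical sketch: compute an nginx worker count from a cgroup v2 CPU quota.
# cpu.max holds "<quota> <period>" in microseconds, or "max" when unlimited.
compute_workers() {
  quota="$1"
  period="$2"
  if [ "$quota" = "max" ] || [ -z "$quota" ]; then
    # No cgroup limit: fall back to the visible CPU count
    nproc
  else
    # Round the quota up to a whole number of CPUs
    echo $(( (quota + period - 1) / period ))
  fi
}

# Example usage against the live cgroup file:
#   read -r q p < /sys/fs/cgroup/cpu.max
#   workers=$(compute_workers "$q" "$p")
#   sed -i "s/worker_processes .*/worker_processes ${workers};/" nginx.conf
```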
This is not a problem just for enterprise CPUs either. There are plenty of consumer CPUs with 24, 32 and even 64 threads available. It's good practice to assign CPU/memory limits to containers to ensure a stable QoS and prevent global resource contention. I feel like I've been reasonable in asking whether you agree that a worker process for each thread is overkill, what alternative suggestions you may have, and for some benchmarks or context behind the change to the number of worker processes. Single-threaded performance is far better than it was years ago, and I don't believe the blanket advice of one worker per thread is reasonable or best practice these days. As mentioned before, NGINX is very well optimised and can easily serve tens of thousands of requests per second on a single core without breaking a sweat. I don't see any reason to use more than a single worker process for any deployment other than a huge enterprise one, in which case they should be the ones doing special configuration and performance tuning.
I answered further on the PR, probably best to keep having the conversation in one place instead of two |
Describe what you are trying to accomplish and why in non technical terms
I am running Frigate in Kubernetes on an EPYC 7763, which has 128 threads. The NGINX config will spawn one NGINX worker per thread. The problem is the container is only assigned one CPU.
https://github.com/blakeblackshear/frigate/blob/8163c036ef5e3f2e0db4364726b5ce8b1110816c/docker/main/rootfs/usr/local/nginx/conf/nginx.conf#L3C1-L3C23
Describe the solution you'd like
Instead of spawning hundreds of NGINX workers, there should be only one. I am sceptical that anyone will ever need more than a single NGINX worker for Frigate.
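The proposed change amounts to replacing `worker_processes auto;` in the linked nginx.conf with a fixed value. A minimal sketch of the relevant fragment (the `events` block shown here reflects nginx's default `worker_connections` of 1024, not necessarily Frigate's exact config):

```nginx
# One worker regardless of host thread count; avoids spawning
# 128 workers on a machine like the EPYC 7763 when the container
# is limited to a single CPU via cgroups.
worker_processes 1;

events {
    # A single worker still handles up to 1024 concurrent connections
    worker_connections 1024;
}
```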
Describe alternatives you've considered
It could be configurable... somehow? I am not sure the effort to make that happen is worth it.
Additional context
N/A