

fix(nginx): use 1 worker process #10901

Closed · wants to merge 1 commit
Conversation

uhthomas

@uhthomas uhthomas commented Apr 9, 2024

The value 'auto' will create a worker process for each CPU thread, which can be problematic in containerised environments with CPU limits. A single NGINX worker process should be more than sufficient.

Fixes: #10898
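A minimal sketch of the proposed change as an nginx config fragment (the directive's exact placement within Frigate's bundled config is assumed here):

```nginx
# Before: nginx derives the worker count from the detected hardware threads,
# which ignores container CPU limits (e.g. 128 workers under --cpus 1)
worker_processes auto;

# After (this PR): a single worker process
worker_processes 1;
```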


netlify bot commented Apr 9, 2024

Deploy Preview for frigate-docs ready!

Name Link
🔨 Latest commit dda0733
🔍 Latest deploy log https://app.netlify.com/sites/frigate-docs/deploys/66152e51b1a38100097275d2
😎 Deploy Preview https://deploy-preview-10901--frigate-docs.netlify.app

@NickM-27
Collaborator

NickM-27 commented Apr 9, 2024

This is not correct. Nginx recommends this config, and we specifically added it versions ago to improve HLS and other issues we were seeing: #4688

https://www.cloudbees.com/blog/tuning-nginx#worker-threads

@NickM-27 NickM-27 closed this Apr 9, 2024
@uhthomas
Author

uhthomas commented Apr 9, 2024

This is not correct. Nginx recommends this config, and we specifically added it versions ago to improve HLS and other issues we were seeing: #4688

What is not correct about it? Could you please refer me to the issues users were seeing with a single worker process? NGINX is very well optimised and unlikely to ever be the bottleneck for an application like this.

I hope you can agree that 128 NGINX processes for a container with a single CPU core is undesirable? What would you propose instead?

@NickM-27
Collaborator

NickM-27 commented Apr 9, 2024

What is not correct is your statement that one should be more than enough. It isn't, especially when using HLS. A lot of time was spent improving and optimizing nginx performance, like I said.

I hope you can agree that 128 NGINX processes for a container with a single CPU core is undesirable? What would you propose instead?

You should just bind-mount a custom nginx config that works for your specific use case, as this does not match the use case of the majority of Frigate users.
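A sketch of the bind-mount workaround suggested above. The container path for nginx.conf is an assumption; check the Frigate image for the actual location:

```shell
# Hypothetical sketch: run Frigate with a custom nginx config bind-mounted
# over the bundled one. Paths and image tag below are illustrative.

# 1. Start from a copy of the bundled config and drop the worker count to 1.
#    (Here a stand-in file is created; in practice, copy the real config out
#    of the container first.)
printf 'worker_processes auto;\n' > nginx.conf
sed -i 's/worker_processes .*;/worker_processes 1;/' nginx.conf
cat nginx.conf   # -> worker_processes 1;

# 2. Bind-mount it when starting the container (container path is assumed):
# docker run -v "$PWD/nginx.conf:/usr/local/nginx/conf/nginx.conf" \
#   ghcr.io/blakeblackshear/frigate:stable
```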

@uhthomas
Author

uhthomas commented Apr 9, 2024

What is not correct is your statement that one should be more than enough. It isn't, especially when using HLS. A lot of time was spent improving and optimizing nginx performance, like I said.

I hope you can agree that 128 NGINX processes for a container with a single CPU core is undesirable? What would you propose instead?

You should just bind-mount a custom nginx config that works for your specific use case, as this does not match the use case of the majority of Frigate users.

Could you please refer me to issues or benchmarks which show that a single worker process with 1024 connections is not enough? Many other edits were made in #4688 which will have helped, and I am not sure changing the number of worker processes played a large role in that. As mentioned in the associated issue, a large number of worker processes will actually be detrimental to performance because of CPU throttling and resource contention. To reiterate, this is not an uncommon deployment.

@NickM-27
Collaborator

NickM-27 commented Apr 9, 2024

Many other edits were made in #4688 which will have helped and I am not sure changing the number of worker processes will have played a large role in that

I spent multiple days testing, benchmarking, etc. each of the changes and some even ended up being reverted because they did not help. It seems like you have decided any prior knowledge, experience, etc. of the frigate maintainers is irrelevant and the change should just be made because it doesn't make sense in your particular use case.

To reiterate, this is not an uncommon deployment.

In terms of Frigate users it is; the vast majority of Frigate users run on thin Intel clients / NUCs, not enterprise CPUs.

For now I would suggest making the change locally. In the future this may be made more easily configurable, along with other things in nginx, and perhaps the default value can change based on signals that Frigate detects during startup (like total hardware thread count).

@felipecrs
Contributor

felipecrs commented Apr 13, 2024

If nginx is attempting to spawn 128 worker processes in a container with --cpus 1, it means nginx probably isn't cgroups v1 or v2 aware, and instead relies on information from /proc.

@uhthomas for your specific case, you may consider deploying your docker containers with lxcfs (which emulates /proc inside the container to match the cgroups limits): https://github.com/lxc/lxcfs?tab=readme-ov-file#using-with-docker

If that's really the case, a better fix would be to automatically infer the number of CPUs (based on cgroups, not /proc) during initialization and inject it into the nginx config (or pass it as a flag when starting the nginx process).
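The inference step described above could be sketched as a small shell function, assuming the cgroup v2 `cpu.max` interface (quota and period in microseconds). The function name is illustrative, not part of Frigate:

```shell
#!/bin/sh
# Hypothetical sketch: derive an nginx worker count from the cgroup v2 CPU
# quota instead of the raw hardware thread count.

# cpu.max contains "<quota> <period>", or "max <period>" when no limit is set.
workers_from_cpu_max() {
  quota=$(echo "$1" | cut -d' ' -f1)
  period=$(echo "$1" | cut -d' ' -f2)
  if [ "$quota" = "max" ]; then
    nproc   # no quota: fall back to the hardware thread count
  else
    # round up quota/period so e.g. a 1.5-CPU limit yields 2 workers
    echo $(( (quota + period - 1) / period ))
  fi
}

# Example: a container limited to 2 CPUs (200ms quota per 100ms period)
workers_from_cpu_max "200000 100000"   # prints 2

# During init, the result could then be injected with sed, e.g.:
# workers=$(workers_from_cpu_max "$(cat /sys/fs/cgroup/cpu.max)")
# sed -i "s/worker_processes .*;/worker_processes ${workers};/" /etc/nginx/nginx.conf
```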

@uhthomas
Author

@felipecrs Thanks for your insight. I agree, nginx should probably be cgroup-aware. That said, I don't see an open issue for it, and I'm unaware of how long it would take to trickle down to this project.

I am deploying in Kubernetes, so I don't think lxcfs can help here. Even a smaller number of default workers like 2 or 4 would be a huge improvement IMO.

I don't see a way to set the number of worker processes for nginx with the Frigate container at all. Guidance would be appreciated.

@NickM-27
Collaborator

NickM-27 commented Apr 13, 2024

Like Felipe described, a good fit would be that before starting nginx, the S6 process reads the number of available CPUs from cgroups and updates the nginx configuration using something like sed.

created #10969 to capture this idea

@felipecrs
Contributor

I am deploying in Kubernetes, so I don't think lxcfs can help here. Even a smaller number of default workers like 2 or 4 would be a huge improvement IMO.

It's actually even easier to use lxcfs in k8s than in Docker. I use it every day at work:

https://artifacthub.io/packages/helm/lxcfs-on-kubernetes/lxcfs-on-kubernetes

After installing the chart, all it takes is labeling the namespace. After that, all pods deployed in the namespace will profit from /proc emulation.


Successfully merging this pull request may close these issues.

Use 1 nginx worker process
3 participants