
Conversation

@nemchik nemchik commented Apr 27, 2021

…s from other files

Supersedes #104

Ref: https://git.alpinelinux.org/aports/tree/main/nginx/nginx.conf

Other additions:

  • Auto-generate resolver.conf (only if it doesn't exist) with values from the container's /etc/resolv.conf (helps with k3s/k8s internal DNS resolution)
  • Auto-generate worker_processes.conf (only if it doesn't exist) with the detected number of CPU cores
  • Use the pre-defined ffdhe4096 group for dhparams.pem per RFC 7919

closes #119
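The generation logic described above might look roughly like this (a sketch only; `CONF_DIR`, the `valid=30s` cache lifetime, and the file layout are illustrative assumptions, not the actual init script):

```shell
#!/bin/sh
# Sketch of the conf auto-generation described above.
# CONF_DIR is an illustrative assumption; the real container uses its own paths.
CONF_DIR="${CONF_DIR:-/tmp/nginx-conf-demo}"
mkdir -p "${CONF_DIR}"

# resolver.conf: reuse whatever DNS servers the runtime (Docker, k3s/k8s)
# wrote into the container's /etc/resolv.conf, only if not already present.
if [ ! -f "${CONF_DIR}/resolver.conf" ]; then
    RESOLVER=$(awk '/^nameserver/ {printf "%s ", $2}' /etc/resolv.conf)
    echo "resolver ${RESOLVER}valid=30s;" > "${CONF_DIR}/resolver.conf"
fi

# worker_processes.conf: CPUs actually available inside the container.
if [ ! -f "${CONF_DIR}/worker_processes.conf" ]; then
    echo "worker_processes $(nproc);" > "${CONF_DIR}/worker_processes.conf"
fi

# dhparams.pem: the pre-defined RFC 7919 ffdhe4096 group is copied verbatim
# from a bundled file (no generation needed); omitted here for brevity.
```

Because both files are only written when absent, a user-supplied value survives container restarts.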

@nemchik nemchik requested a review from a team April 27, 2021 14:36
aptalca commented Apr 27, 2021

Also, food for thought... Currently we just set the resolver to 127.0.0.11, which is Docker's DNS service and works great with Docker networking. However, k3s/k8s use a different resolver for their networks, requiring modification of all the nginx and proxy confs. Perhaps it would be more prudent to rely on what's listed in /etc/resolv.conf? Both Docker and k3s/k8s seem to populate that properly.

Here's a potential implementation: https://serverfault.com/a/638855
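In the spirit of the linked answer, deriving an nginx `resolver` directive from the container's /etc/resolv.conf could look like this (a sketch; the `valid=30s` cache lifetime is an illustrative choice, not from the PR):

```shell
# Collect every nameserver the runtime wrote into /etc/resolv.conf
# (Docker's embedded DNS, or the k3s/k8s cluster resolver) and emit
# a single nginx "resolver" directive.
nameservers=$(awk '/^nameserver/ {print $2}' /etc/resolv.conf | tr '\n' ' ')
printf 'resolver %svalid=30s;\n' "${nameservers}"
```

The same one-liner works unmodified under Docker and Kubernetes, since both populate /etc/resolv.conf with the correct internal resolver.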

nemchik commented Apr 27, 2021

> Also, food for thought... Currently we just set the resolver to 127.0.0.11, which is Docker's DNS service and works great with Docker networking. However, k3s/k8s use a different resolver for their networks, requiring modification of all the nginx and proxy confs. Perhaps it would be more prudent to rely on what's listed in /etc/resolv.conf? Both Docker and k3s/k8s seem to populate that properly.
>
> Here's a potential implementation: https://serverfault.com/a/638855

Addressed in d77a64a

maggie44 commented May 7, 2021

I have read mixed things about setting NGINX worker processes from inside a Docker container: some claim the 'auto' setting cannot detect the number of cores, but in my tests setting worker_processes to auto has worked well. It may be worth considering as an alternative to this:

WORKER_PROCESSES=$(wc -w < /sys/fs/cgroup/cpuacct/cpuacct.usage_percpu)

Pulling the latest Alpine container in Docker and installing NGINX with apk add shows it also uses the auto setting:

# Set number of worker processes automatically based on number of CPU cores.
worker_processes auto;

For reference:

https://nginx.org/en/docs/ngx_core_module.html#worker_processes

"worker_processes – The number of NGINX worker processes (the default is 1). In most cases, running one worker process per CPU core works well, and we recommend setting this directive to auto to achieve that. There are times when you may want to increase this number, such as when the worker processes have to do a lot of disk I/O." - https://www.nginx.com/blog/tuning-nginx/

aptalca commented May 14, 2021

@Maggie0002 it is only auto-generated if it doesn't already exist. Then it's up to the user to set it to whatever number they prefer, or auto

maggie44 commented May 14, 2021

> @Maggie0002 it is only auto-generated if it doesn't already exist. Then it's up to the user to set it to whatever number they prefer, or auto

I mean the default generated by the script would likely be better as 'auto' rather than detecting the number of cores itself and echoing the number to a file. Using 'auto' hands the detection off to NGINX, reducing the need for the container scripts to handle that logic. It aligns with the Alpine NGINX package defaults and the recommended settings from the NGINX page noted above. It also provides a default that responds to changes at the hardware level: for example, if I were to scale a VPS from 2 cores to 4, then after a restart the 'auto' setting would adjust accordingly, whereas the script here would have left me at 2 cores.

The auto-generated file makes sense, but having the new file generated with the default setting of 'auto' would likely be more effective.

nemchik commented May 14, 2021

> @Maggie0002 it is only auto-generated if it doesn't already exist. Then it's up to the user to set it to whatever number they prefer, or auto
>
> I mean the default generated by the script would likely be better as 'auto' rather than detecting the number of cores itself and echoing the number to a file. Using 'auto' hands the detection off to NGINX, reducing the need for the container scripts to handle that logic. It aligns with the Alpine NGINX package defaults and the recommended settings from the NGINX page noted above. It also provides a default that responds to changes at the hardware level: for example, if I were to scale a VPS from 2 cores to 4, then after a restart the 'auto' setting would adjust accordingly, whereas the script here would have left me at 2 cores.
>
> The auto-generated file makes sense, but having the new file generated with the default setting of 'auto' would likely be more effective.

Based on all the information we're able to find on this topic, auto does not work correctly in Docker.

Ref: nginx/docker-nginx#31

After some more extensive testing I have found that the originally proposed solution `wc -w < /sys/fs/cgroup/cpuacct/cpuacct.usage_percpu` gives the count of all CPU cores the system has, regardless of any CPU pinning or limits you may have placed on the Docker container. After additional research I have switched this to `nproc`, which is much simpler and only gives the number of CPU units available from inside the container.
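The difference is easy to demonstrate from inside a container (a sketch; the cgroup v1 path only exists on cgroup-v1 hosts, hence the guard):

```shell
# cgroup v1 exposes one per-CPU usage counter per *host* CPU, so the word
# count reflects the host regardless of --cpuset-cpus pinning or limits:
if [ -r /sys/fs/cgroup/cpuacct/cpuacct.usage_percpu ]; then
    echo "host CPUs:      $(wc -w < /sys/fs/cgroup/cpuacct/cpuacct.usage_percpu)"
fi

# nproc honors the process affinity mask, so it reports only the CPUs
# the container is actually allowed to use:
echo "available CPUs: $(nproc)"
```

On an unpinned container the two numbers match; with pinning (e.g. `--cpuset-cpus=0-1` on an 8-core host) only `nproc` reflects the restriction.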

You are correct that our init script would, for example, set you up with a value of 2, and if you moved the instance to another system with 4 it would not automatically adjust. We are not enforcing that you must keep the original value that we set. You can change it from 2 to auto, and if that works for you despite the conversation on the official nginx Docker repo, that's great. If not, you are free to set the number to anything you'd like.

Our container has shipped with the default hard coded to 4 (editable by users), and we have not had users raise issues when they only have a system with 1 or 2 CPUs (I can personally attest to having no problems on a VM with a single CPU, and we have many users running Raspberry Pi with 2). This automatic detection will likely only benefit users who have a very high number of cores, put a great deal of load on their system, and never bother to look at the config on their own. The other benefit to us is having the number set in a file that should be pretty obvious to users who look in their config, and that we are not logging update warnings for.

Anyway we do appreciate you chiming in (got me to take a second look at the original implementation). If you have any additional info I'm happy to read it.

@maggie44
> Anyway we do appreciate you chiming in (got me to take a second look at the original implementation). If you have any additional info I'm happy to read it.

Of course no problem at all, appreciate the consideration of the change.

I had seen posts like the one you cited, but hadn't found anything more recent echoing that the issue still remains (the cited one is 6 years old). So I ran a test myself a while back. Since worker_processes starts one process per configured worker, you can see the workers in htop on Linux: set worker_processes to 4 and there will be 4 lines for the NGINX worker process; set it to 1 and there will be 1 line, and so on. When set to auto in a container, it launched the correct number of worker processes based on the number of CPUs, and the count changed after a restart when I scaled the number of CPUs on the server.

Interesting, though, that the previous solution you had there didn't pick up resource limiting on the container. That's one thing I didn't check with 'auto' mode in the container test mentioned above. Also interesting that the default Alpine NGINX package uses auto (the file you referenced in your original post), considering Alpine images are primarily used as containers and there doesn't seem to be any concrete guidance from NGINX on it.

That's my two cents on it. Everything you mentioned above makes sense; setting a worker process count on launch is already a step up from defaulting to 4. And despite NGINX's recommendation to use auto, if I recall correctly they still default to a fixed number in their own Docker images.

Of course, anyone in a circumstance where the number of worker processes really matters is running a high-demand service and should be checking this, and a lot more, manually anyway.

Maybe all this info so far will prove useful to someone else experimenting with it in the future.


Development

Successfully merging this pull request may close these issues.

DH params job failing for several weeks
