What is the purpose of prefork? #180
Thanks for opening your first issue here! 🎉 Be sure to follow the issue template!
Prefork enables use of the SO_REUSEPORT socket option, which is available in newer versions of many operating systems, including DragonFly BSD and Linux (kernel version 3.9 and later). This socket option allows multiple sockets to listen on the same IP address and port combination, and the kernel then load balances incoming connections across those sockets. SO_REUSEPORT scales well when many concurrent client connections (at least thousands) are established over a real network (preferably 10 Gbit with per-CPU hardware packet queues). When a small number of concurrent client connections is established over localhost, SO_REUSEPORT usually doesn't give any performance gain. See the benchmarks where preforking is enabled, and NGINX's article on socket sharding.
Thanks for that explanation. Do you have evidence to suggest a single Go process cannot support "[thousands] of concurrent client connections" without SO_REUSEPORT?
A single Go process can easily support thousands of concurrent connections. Preforking spawns multiple Go processes and lets the OS load balance incoming connections across them at the kernel level. It's up to you whether preforking has an advantage for your web app; we only provide the experimental option to enable it. Feel free to re-open this issue if you have further questions!
So prefork runs multiple worker processes? |
Does using Fiber behind a reverse proxy like NGINX reduce the possible benefit (since the proxy is doing the same kind of load balancing), or is there still a benefit? I assume there are fewer TCP connections between the reverse proxy and Fiber.
When Fiber prefork is active, the database automigrate runs in every process. How can I make it run only once?
https://docs.gofiber.io/api/fiber#ischild

if !fiber.IsChild() {
    // runs only once, in the parent process
}
Thanks @ReneWerner87 |
I also have a question about sharing memory. We have implemented a FastCache instance for our Fiber application. When the application starts, the data is pulled from an external server with an HTTP GET request and the cache is updated. So far so good, as both child processes get their own update. We also have a feature to update the cache by pushing it via HTTP POST from the external server, so that it doesn't require a restart on the Fiber application's side. Will it update every process? I assume it will not. My questions are:
If we could share the cache between forks within the application, that would be a solution, but from my research it doesn't seem possible. Sharing memory would also reduce the memory requirements, since we would need only one instance for all the processes.
Well, that's the problem with in-memory caches when you use multiple processes. To 1: you have to inform all processes of the update; for this there are concepts like message queues or pub/sub mechanisms. I personally use Redis and its pub/sub concept for this purpose. To 2 and 3: I think no, but then you lose the benefit of processing across several processes.
Question description
What is the intent behind the prefork option?