Question on performance and multiple daemons #4
I have a quick question in relation to the performance of your library, predominantly CPU and memory consumption.
If I had an application with, say, 5 background processing channels, then for maximum single-process throughput I would run a daemon per channel. Alternatively, I could run 5 workers, with each worker responsible for a single channel, processing one payload at a time. What would be the performance impact of running either scenario using your library, and what would your recommendation be?
I could benchmark this to determine those stats, but given that you have used your library in production to run complex daemons, I was hoping to draw on your experience to get a high-level perspective.
Comments
You can't run multiple daemons at the same time within the same process, due to how signal handling is done in PHP. Each daemon would override the other's signal handlers and cause mayhem. So your only option is to have a single daemon with multiple workers (which makes more sense anyway).
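For illustration only (this is my own minimal example, not code from the library): pcntl keeps exactly one handler per signal for the whole process, so whichever "daemon" registers last silently wins.

```php
<?php
// Illustrative sketch only, not library code: a second handler registered
// for the same signal in the same process replaces the first one.

pcntl_signal(SIGTERM, function (): void {
    echo "daemon A shutting down cleanly\n";   // installed first, then lost
});

pcntl_signal(SIGTERM, function (): void {
    echo "daemon B shutting down cleanly\n";   // this handler wins
});

posix_kill(posix_getpid(), SIGTERM);
pcntl_signal_dispatch();   // prints only the daemon B message
```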
Each worker will take up at most the resources of a single CPU core (since PHP is single-threaded) and start with the memory footprint of the parent daemon (and usually just grow from there). The daemon/worker code in my library doesn't add any overhead to your workers, except in how it serializes and queues the return value from a worker. In my experience the workers aren't moving fast enough for this to be a concern. I have run into an issue with my daemon code where workers returning results too quickly cause a problem with the queue and make it crash. In the future I hope to fix this, probably by switching to a different queuing mechanism that uses sockets instead of the SysV queue.
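To make the serialize-and-queue part concrete, here's a rough sketch of the mechanism (again my own stripped-down example, not the library's actual code or message format): the forked worker pushes its serialized return value onto a SysV message queue and the parent daemon pops it off. If workers enqueue results faster than the daemon dequeues them, the queue fills up and sends start failing.

```php
<?php
// Sketch only: worker return values serialized onto a SysV message queue.

$queue = msg_get_queue(ftok(__FILE__, 'q'));

$pid = pcntl_fork();
if ($pid === 0) {
    // Child/worker: do some work and queue the serialized result.
    $result = ['job' => 42, 'status' => 'done'];
    msg_send($queue, 1, $result, true, true, $errorCode);
    exit(0);
}

// Parent/daemon: block until a result arrives, then unserialize it.
msg_receive($queue, 1, $msgType, 65536, $result, true);
pcntl_waitpid($pid, $status);
var_dump($result);
```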
If your workers are returning values so quickly that the daemon can't keep up, then you need to re-evaluate how the workers are performing their jobs. Maybe have a worker do multiple tasks before returning a bulk result, as in the sketch below.
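Something along these lines (the names are illustrative, not part of the library's API): the worker collects a chunk of results and returns them as one bulk payload, so only one value gets serialized and queued per chunk instead of one per task.

```php
<?php
// Sketch of the batching idea: return one bulk result per chunk of tasks.

function doOneTask(int $task): array
{
    return ['task' => $task, 'result' => $task * 2];   // stand-in for real work
}

function processBatch(array $tasks): array
{
    $results = [];
    foreach ($tasks as $task) {
        $results[] = doOneTask($task);
    }
    return $results;   // returned (and queued) as a single value
}

var_dump(processBatch(range(1, 100)));
```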
To clarify, when I said I would run a daemon for each channel for single-process throughput, I actually meant I would run each daemon as a separate process, i.e. 5 individual daemons each doing work for their assigned channel, with effectively a single worker per daemon. For the use case I have in mind, I doubt the workers would be moving fast enough for that to be a problem, but it has been useful to have your insight, many thanks.