How many processors do I need to allocate to the fastify service in docker/k8s? #780

Closed
budarin opened this issue Oct 28, 2022 · 13 comments · Fixed by fastify/fastify#4386
Labels
help wanted Extra attention is needed

Comments

@budarin

budarin commented Oct 28, 2022

💬 Question here

Fastify uses pino, which is implemented as a worker and therefore requires additional processor time

Based on your experience, how much CPU should be allocated for a service written in fastify?

PS: I haven't found any information about this anywhere

budarin added the help wanted label on Oct 28, 2022
@mcollina
Member

I normally recommend allocating at least two (virtual) processors per Node.js process to get the best latency. This advice is not specific to Fastify or Pino.
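
For reference, a minimal sketch of what that allocation could look like for a containerized service; the pod name, image, and exact numbers below are illustrative, not taken from this thread. With plain Docker the rough equivalent is `docker run --cpus=2`.

```yaml
# Illustrative Kubernetes pod spec: roughly two vCPUs for one Node.js process
apiVersion: v1
kind: Pod
metadata:
  name: fastify-service                      # illustrative name
spec:
  containers:
    - name: app
      image: example/fastify-service:latest  # illustrative image
      resources:
        requests:
          cpu: "1"      # guaranteed share
        limits:
          cpu: "2"      # room for GC and libuv work next to the main thread
```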

@budarin
Author

budarin commented Oct 29, 2022

Do I understand correctly that this recommendation applies to a simple Node.js application without workers, and that a worker requires an additional CPU?

@Uzlopak

Uzlopak commented Oct 29, 2022

Node is always said to be single-threaded, but it can utilize a second CPU core.

So if you give Node a second core, it will run faster than on a single core.

@budarin
Author

budarin commented Oct 29, 2022

Thanks, I understand now that any simple Node.js app requires at least 2 virtual processors.

But my question is about the use of a worker (in this case Pino): is it necessary to allocate another virtual processor for it, or will two processors be enough for an application on fastify?

@jsumners
Member

will two processors be enough for an application on fastify?

You must make your own determination based upon your application's requirements. Review https://www.youtube.com/watch?v=P9csgxBgaZ8

@mcollina
Member

The second vCPU will mostly be used by the GC and the libuv threadpool. Note that this is to minimize latency for your users as well as memory usage, since the GC can run more frequently. The main thread won't have to stop to let the GC run.

It's totally fine to run Node with 1 vCPU too.

@L2jLiga
Member

L2jLiga commented Oct 29, 2022

It's totally fine to run Node with 1 vCPU too.

I can confirm that. For most of my applications 1 vCPU was enough, and for some of them (e.g. an API gateway) we've allocated less, about 100m-200m CPU in Kubernetes.

@kibertoad
Member

@mcollina This whole thread is useful information that deserves to be preserved. Can I create a PR to fastify, even though the advice applies to more than just Fastify?

@mcollina
Member

Sure thing, go ahead!

@kibertoad
Member

@mcollina Would you say that running Pino in a separate worker thread affects the recommendation in any way, or is "1 vCPU when optimizing for throughput, 2 vCPUs when optimizing for latency" still a good rule of thumb?

@mcollina
Member

The main reason to have 2 vCPUs is to have one where the V8 GC can run.
Pino would hardly matter in that case.

Note that Pino won't use a worker_thread by default, so this recommendation depends heavily on what the Pino transport does.
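
For context, a minimal sketch of the two setups being compared; `pino/file` with `destination: 1` targets stdout, and the logger variable names are just for illustration:

```js
const pino = require('pino')

// Default: no transport, no worker thread.
// Log lines are written to stdout from the main thread.
const plainLogger = pino()

// With a transport: Pino spawns a worker thread that does the
// formatting and writing, which is the case where an extra busy
// thread could matter for CPU allocation.
const workerLogger = pino({
  transport: {
    target: 'pino/file',
    options: { destination: 1 } // 1 = stdout
  }
})

plainLogger.info('written directly to stdout')
workerLogger.info('written to stdout via a worker thread')
```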

@kibertoad
Member

@mcollina If we assume the Pino transport simply writes to stdout (as this is the most popular option in the k8s world), would using a worker thread be recommended? And how would that affect the recommendation?

@mcollina
Member

If you just aim to write to standard out, don't use a pino transport.
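
In Fastify terms that is simply the default logger, e.g. (a minimal sketch; the port and route are illustrative):

```js
// Default logger: Fastify's built-in Pino instance writes JSON lines
// straight to stdout on the main thread, with no transport and no
// extra worker thread.
const fastify = require('fastify')({ logger: true })

fastify.get('/', async () => ({ hello: 'world' }))

fastify.listen({ port: 3000 }, (err) => {
  if (err) {
    fastify.log.error(err)
    process.exit(1)
  }
})
```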
