feat(worker): Add support for concurrent pollers #1132
Conversation
@bergundy Some major changes from the previous pass. Please take another look. @Sushisource Would you mind taking a second look at the changes in the worker-options.ts file? Any runtime constraint I forgot to check? Also, in tests against Temporal Cloud, I had to set both WF and Act pollers to ~60 each to get peak performance for a single-thread Worker running on my laptop. That seems way too high to use as a default value, but performance with only 2 pollers (i.e. the minimum) was very disappointing. I settled on 10 WF/Act pollers as the default value. Comments?
I think the option defaults make sense 👍
Just some wording fixes in the docstrings.
Co-authored-by: Spencer Judge <sjudge@hey.com>
Thanks a lot! I need to find a good English spell/grammar checker that knows how to deal with source code, but hopefully, in the meantime, there are peer reviews 😆
* Setting this value higher than needed will generally not have a negative impact on this Worker's performance; your
* server may however impose a limit on the total number of concurrent Workflow Task pollers.
But it may have a negative impact on the cluster's perf and may result in workflow tasks timing out if they're not processed in a timely fashion.
As far as I could see in perf tests, task timeouts were linked to setting `maxConcurrentWorkflowTaskExecutions` too high, not to increasing `maxConcurrentWorkflowTaskPolls`. Core won't poll anyway if there is no execution slot available.
Still, I will change the phrasing to mention that setting this too high might have a negative impact on the server cluster.
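To make the point above concrete, here is a simplified model (not SDK code; the interface and helper are hypothetical, named after the real options) of why extra pollers are wasted once execution slots fill up: effective poll concurrency is bounded by the number of free slots, since Core won't issue a poll when no slot is available.

```typescript
// Hypothetical model of poller/slot interaction; not actual Core logic.
interface PollerTuning {
  maxConcurrentWorkflowTaskExecutions: number;
  maxConcurrentWorkflowTaskPolls: number;
}

// Effective poll concurrency: capped by both the configured poller count
// and the number of currently free execution slots.
function effectivePolls(opts: PollerTuning, busySlots: number): number {
  const freeSlots = Math.max(0, opts.maxConcurrentWorkflowTaskExecutions - busySlots);
  return Math.min(opts.maxConcurrentWorkflowTaskPolls, freeSlots);
}
```

Under this model, raising the poller count beyond the execution-slot count buys nothing: with 40 execution slots, 60 configured pollers never yield more than 40 concurrent polls.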
* `maxConcurrentWorkflowTaskPolls` of `60`. Your mileage may vary.
* In some performance tests against Temporal Cloud, running with a single Workflow thread and the Reuse V8 Context
* option enabled, we reached peak performance with a `maxConcurrentWorkflowTaskExecutions` of `120` and a
* `maxConcurrentWorkflowTaskPolls` of `60` (worker machine: Apple M2 Max; ping of 74 ms to Temporal Cloud;
I'm wondering what the CPU load was and number of wf/s (not that we need to document that).
Awesome work on this!
What changed
- Added `WorkerOptions` properties `maxConcurrentWorkflowTaskPolls` and `maxConcurrentActivityTaskPolls`, which allow controlling the number of pollers used to fetch Workflow/Activity tasks from the Task Queue. Properly adjusting these values should allow better filling of the corresponding execution slots.
- Removed the `@experimental` tag on `reuseV8Context`.
- Changed the default value of `maxCachedWorkflows`, including using a different formula when `reuseV8Context` is enabled (fixes [Bug] Default `WorkerOptions.maxCachedWorkflows` is too low #838).
- The default `maxConcurrentWorkflowTaskExecutions` has been reduced to 40 (was previously 100), as higher values increase the risk of Workflow Task Timeouts unless other options are also tuned. This was not a problem previously because the single poller was unlikely to fill all execution slots anyway, so the max would rarely be reached.
- Various improvements to `WorkerOptions` documentation.
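Putting the options from this PR together, a hypothetical tuning sketch (the option names come from this PR; the task queue name, workflows path, and numeric values are illustrative only — the right numbers depend on your machine, workload, and latency to the server, and this won't run without a reachable Temporal server):

```typescript
import { Worker } from '@temporalio/worker';

async function run(): Promise<void> {
  const worker = await Worker.create({
    taskQueue: 'example', // hypothetical task queue name
    workflowsPath: require.resolve('./workflows'), // hypothetical workflows module
    reuseV8Context: true,
    // Values in the range discussed in this PR; tune executions and polls
    // together to avoid Workflow Task Timeouts.
    maxConcurrentWorkflowTaskExecutions: 40,
    maxConcurrentWorkflowTaskPolls: 10,
    maxConcurrentActivityTaskPolls: 10,
  });
  await worker.run();
}
```

Note that poller counts only help up to the point where execution slots are kept full; past that, they mostly add load on the server.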