
Need Threadpool minThreads config setting #4819

Closed
benaadams opened this issue Jan 18, 2016 · 16 comments

@benaadams (Member)

ThreadPool.SetMinThreads doesn't exist as an API in CoreCLR; apparently it's available to, and the responsibility of, the hosting model: https://github.com/dotnet/coreclr/issues/1606#issuecomment-144126757

So this should be exposed via a config setting in the dotnet CLI.
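
For reference, a minimal sketch of the call being asked for, as it exists on the desktop framework (the thread counts are placeholders, not a recommendation):

```csharp
using System.Threading;

// The API this issue asks to expose on CoreCLR: raise the floor of worker
// and IOCP threads so bursty blocking workloads aren't throttled by the
// pool's slow thread-injection rate. Returns false if the values are
// invalid (e.g. above the current maximums).
bool ok = ThreadPool.SetMinThreads(workerThreads: 1000, completionPortThreads: 1000);
```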

@TheRealPiotrP (Contributor)

@benaadams is this blocking Kestrel?

@schellap, can you take a look?

@benaadams (Member, Author)

@piotrpMSFT depends; it might block how users use Kestrel, if that's not too enigmatic 😁

@halter73 is probably better placed to comment on it.

@davidfowl (Member)

Not Kestrel specifically; it's server scenarios in general.

@schellap

Can you try specifying this as a workaround (export on Linux/macOS, set on Windows)?

export ComPlus_ThreadPool_ForceMinWorkerThreads=2
set ComPlus_ThreadPool_ForceMinWorkerThreads=2

See ThreadPool_* here:
https://github.com/dotnet/coreclr/blob/master/Documentation/project-docs/clr-configuration-knobs.md

@gkhanna79, the configuration knobs for CoreCLR need to be exposed via the corhost interface or through the PropertyKeys/PropertyValues arguments to coreclr_initialize. Is someone working on this? Or is the host (or the user) supposed to set this as an environment variable for its own process?

Cc @ericeil

@benaadams (Member, Author)

@schellap yep, that works. I used

set ComPlus_ThreadPool_ForceMinWorkerThreads=1000

for 1024 synchronous, blocking-type connections and it worked happily.

@gkhanna79 (Member)

@schellap COMPlus_* configuration settings are intended to come from environment variables. However, the question of exposing ThreadPool configuration (either by adding a setting in runtime.config.json or an FX API) is something @sergiy-k should comment on.

@gkhanna79 gkhanna79 assigned sergiy-k and unassigned schellap Jan 19, 2016
@benaadams (Member, Author)

@gkhanna79 a config or API approach would be better than an environment variable, as it's more likely to travel with the component that has the constraint, whereas an environment variable feels like it relies on good documentation/process and on that being followed well.

If I had to choose only one, config would probably be best, as it can be retroactively applied to a deployed app without recompiling it to include the increased thread pool count (in a something's-gone-wrong-in-production panic moment).

@sergiy-k

If exposing this setting via the runtime config file works for ASP.NET, then I would prefer to go with that approach.
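
For illustration, one possible shape for such a setting in the runtime config file; the property name here is hypothetical, since no such knob existed at the time:

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.Threading.ThreadPool.MinThreads": 1000
    }
  }
}
```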

@kouvel, could you please take a closer look at this and see if there are any problems with exposing this option in the hosting APIs?
Also please see: https://github.com/dotnet/coreclr/issues/1606

@sergiy-k sergiy-k assigned kouvel and unassigned sergiy-k Jan 20, 2016
@ericeil

ericeil commented Jan 20, 2016

In general, the trouble with SetMinThreads is that it depends on the underlying thread pool having a notion of "min threads" in the first place, which not all do.

Which thread pool are we using in CoreRT?

Maybe this would be fine as a config option, as long as it was clear that it configures the CLR thread pool and has no effect if a different thread pool is being used (i.e., if the runtime is not CoreCLR).

@schellap

For CoreRT we would have a separate set of nativeOptions, and for CoreCLR, runtimeOptions in the runtime.config.json.

@schellap

@kouvel, hopefully we can also pass other CLRConfigValues (ints and strings), beyond the current startup-flags approach used for bool values, e.g. server_gc.

@kouvel (Member)

kouvel commented Jan 20, 2016

Yeah, it should be fine to pass ints and strings as well. I'll take a look.

@benaadams (Member, Author)

> is that it depends on the underlying thread pool having a notion of "min threads" in the first place, which not all do.

Are the characteristics of these thread pools known? Will they be subject to jamming issues when too many interdependent things are blocked and their unblockers are queued behind them?
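
As an illustration of the jamming scenario (not from the issue), here is a minimal C# sketch in which every queued item blocks on work whose completion callback itself needs a pool thread, so a small pool makes little progress until the runtime slowly injects more threads:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class PoolJamDemo
{
    static void Main()
    {
        const int items = 64;
        var done = new CountdownEvent(items);
        var sw = Stopwatch.StartNew();

        for (int i = 0; i < items; i++)
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                // Blocks a pool thread; the timer callback that completes
                // this Task also needs a pool thread to run on.
                Task.Delay(100).Wait();
                done.Signal();
            });
        }

        done.Wait();
        // With a low min-thread count this takes many seconds rather than
        // ~100 ms, because new threads are injected roughly once per second.
        Console.WriteLine($"Completed {items} items in {sw.Elapsed}");
    }
}
```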

@kouvel (Member)

kouvel commented Feb 8, 2016

CC @ericeil

I just want to bring this up again. Why not add the APIs SetMinThreads and SetMaxThreads?

My thoughts:

  • It seems like the scenario requires this configuration, regardless of which thread pool is being used. The .NET Native thread pool may not support it yet, but if it will be used for server scenarios in the future, wouldn't it be necessary for the .NET Native thread pool to support it as well? Until then, perhaps it could just throw NotSupportedException.
  • The user would know best how to configure this, based on machine configuration, load distribution, etc. Whether it is through host config or server app config, I guess both would work, but it seems the app would be the natural place to configure this, based on its user-provided configuration (see the sketch below).
  • While the thread pool is ideally used for short-running CPU-bound work items, I'm sure that is not the case in practice and can't be expected to be true. The TPL uses the thread pool, and async tasks can include disk or network IO. The app doesn't necessarily know whether a work item is going to be long-running to flag the task as such (depends on network conditions, size of database query results, etc.), and always flagging network tasks as long-running may not be feasible.

So, it seems to me like a fair piece of configuration to support for any thread pool.
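
A minimal sketch of how an app might use such APIs if they were exposed; the values and error handling are illustrative assumptions, not from this thread:

```csharp
using System;
using System.Threading;

// Hypothetical server startup: the app picks the thread floor from its own
// configuration (expected blocking connections, machine size, etc.).
int minWorkers = 1000;  // illustrative value, e.g. read from app settings

// Keep the current IOCP floor and only raise the worker floor.
ThreadPool.GetMinThreads(out _, out int minIo);

if (!ThreadPool.SetMinThreads(minWorkers, minIo))
{
    // SetMinThreads returns false if the values exceed the current maximums
    // (see SetMaxThreads) or are otherwise invalid.
    Console.WriteLine("SetMinThreads rejected the requested values.");
}
```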

@kouvel (Member)

kouvel commented Feb 8, 2016

Also, I'm not sure if this is done already, but perhaps the thread pool could be improved to track which of its scheduled threads are in a wait state and to schedule new threads more quickly, to keep the CPU as busy as possible. When threads come out of the wait state there may be more threads than is ideal, all of them CPU-bound, but as a heuristic it may work better than scheduling one new thread per second. If none of the scheduled threads are in a wait state and they are all long-running, I guess it would be a choice whether to start new threads to release waiting work items (perhaps one per second, or some other heuristic) or to block pending work items until a thread becomes available.

@kouvel (Member)

kouvel commented Apr 25, 2016

@kouvel kouvel closed this as completed Apr 25, 2016
@msftgits msftgits transferred this issue from dotnet/cli Jan 31, 2020
@msftgits msftgits added this to the Backlog milestone Jan 31, 2020