Insufficient configured threads #2503

Closed
marcel91 opened this issue May 3, 2018 · 24 comments

@marcel91

marcel91 commented May 3, 2018

We try to limit the number of threads used by jetty to reduce the memory consumption with

QueuedThreadPool threadPool = new QueuedThreadPool(8, 1);
httpClient.setExecutor(threadPool);

During the initialization of the HttpClient we sometimes get an exception stating that we have not configured enough threads. The problem is that this is not reproducible, and the number of required threads also varies:

Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=9 < max=8 for QueuedThreadPool[qtp1658773836]@62dee14c{STARTED,1<=1<=8,i=1,q=0}
at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:149)
at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:130)
at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:175)
at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:251)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
at org.eclipse.jetty.client.AbstractConnectorHttpClientTransport.doStart(AbstractConnectorHttpClientTransport.java:64)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
at org.eclipse.jetty.client.HttpClient.doStart(HttpClient.java:241)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)

Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=13 < max=8 for QueuedThreadPool[qtp1015743074]@3c8b0262{STARTED,1<=1<=8,i=1,q=0}
at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:149)
at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:130)
at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:175)
at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:251)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
...

Often we don't have any problems at all. Is there a minimum number of threads that guarantees this problem will not occur?

Thanks!
Marcel

@joakime
Contributor

joakime commented May 3, 2018

The number of threads you need is determined by the components within Jetty you start to use and the hardware present on your system (the number of CPU cores and network interfaces impacts NIO and consequently your thread requirements).

If your goal is to limit memory usage, you don't do that by configuring thread limits.
You do that by limiting connections or requests. (See DoSFilter, QoSFilter, and LowResourcesMonitor).

A default configured Jetty Distribution for HTTP + webapp deployment can operate on an embedded system with 16MB (this is not a typo, yes, I do mean megabytes) of memory serving hundreds of simultaneous clients.
All of the memory usage after that comes from your load (connections, requests, etc) and your application code, not Jetty.

@joakime
Contributor

joakime commented May 3, 2018

Seeing as your stacktrace has HttpClient, your memory impact will be determined by the number of outstanding connections (there is a connection pool), the number of outstanding requests, your hardware again, the types of servers you connect to (servers using HTTP/2 will use more threads than HTTP/1), etc.

Leave the threading alone (default values); consider configuring the maxConnectionsPerDestination and maxRequestsQueuedPerDestination values instead.
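
A minimal sketch of that approach (assuming the Jetty 9.4 HttpClient API; the values below are illustrative, not recommendations):

HttpClient httpClient = new HttpClient();
// Cap how many connections may be opened to any single destination (Jetty's default is 64).
httpClient.setMaxConnectionsPerDestination(4);
// Cap how many requests may queue up waiting for a connection (Jetty's default is 1024).
httpClient.setMaxRequestsQueuedPerDestination(128);
httpClient.start();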

@joakime
Contributor

joakime commented May 3, 2018

Also consider using the streaming APIs on HttpClient rather than the whole-content APIs.
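
For example, a sketch that streams the response body instead of buffering it whole (assuming an already-started httpClient and Jetty 9.4's InputStreamResponseListener from org.eclipse.jetty.client.util; the URL is illustrative):

InputStreamResponseListener listener = new InputStreamResponseListener();
httpClient.newRequest("http://example.com/large-resource") // illustrative URL
        .send(listener);
// Wait briefly for the response headers, then consume the body incrementally.
Response response = listener.get(10, TimeUnit.SECONDS);
if (response.getStatus() == 200) {
    try (InputStream body = listener.getInputStream()) {
        // read from the stream here instead of holding the whole content in memory
    }
}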

@sbordet
Contributor

sbordet commented May 6, 2018

As HttpClient starts, it also starts its dependent components such as the thread pool, the scheduler, the NIO subsystem, etc.

HttpClient components may need threads from the thread pool for their work or internal housekeeping; those threads therefore won't be available for normal HttpClient processing.

In your case, you likely have not specified the number of selectors for the HttpClient transport, as explained in the documentation. If you don't, a heuristic is applied depending on the cores available on the machine.

However, you constrained the number of threads in the pool to a number that is too low for the heuristic, so you get a startup failure as described by the exception.
The fact that the number changes (required=9 or required=13) may be due to races in the start of the components, or to the fact that you reported exceptions from machines with different numbers of cores.
The exception is reported immediately, so required=9 means that to start that component you needed at least 9 threads, but it does not mean that a thread pool with 9 threads is sufficient, since other components have not yet started and may require additional threads.

If you really want to run with a small number of threads, make sure you explicitly specify the number of selectors, as per the documentation linked above, for example using just one selector.
You also want to specify the number of reserved threads in the QueuedThreadPool, say again one.
Then you can reduce the number of threads in the pool.
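
For example, a minimal sketch of that configuration (assuming Jetty 9.4 APIs; constructor signatures differ slightly between Jetty versions, and the numbers are illustrative):

QueuedThreadPool threadPool = new QueuedThreadPool(8, 1);
threadPool.setReservedThreads(1); // explicitly one reserved thread instead of the heuristic

// Explicitly one selector instead of the core-count heuristic; null = no TLS in this sketch.
HttpClient httpClient = new HttpClient(new HttpClientTransportOverHTTP(1), null);
httpClient.setExecutor(threadPool);
httpClient.start();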

Having said that, @joakime is right in that if your goal is to reduce memory usage, you also want to tune many other things before looking at the thread pool.

@sbordet sbordet closed this as completed May 6, 2018
@marcel91
Author

marcel91 commented May 7, 2018

@joakime @sbordet Thank you for the advice. I will look into it. I would just like to mention that threads also need a non-negligible amount of (native) memory, so it makes sense for us to limit their number.

@marcel91
Author

marcel91 commented May 7, 2018

@joakime @sbordet IMHO it is not the best idea to configure default values dependent on the number of CPUs on the system, especially if this configuration can stop the client from working at all. Imagine your software is running on a server with hundreds of (maybe virtual) CPUs. As far as I understand, in this case you could run into problems even with the default value of 200 threads.

@sbordet
Contributor

sbordet commented May 7, 2018

@marcel91 there are cases where heuristics based on the number of cores are exactly what is needed, and other cases where they are not the best choice.

It is possible to change the defaults if, like in your case, you want to go to the extreme and be in strict control of your system.

HttpClient is non-blocking by default, so it will typically use the minimum amount of threads required to cope with the load. The default connection pool also behaves in the same way, reusing as much as possible existing connections before opening new ones.
Using out-of-the-blue numbers for thread pool sizing without having performed tests may not be the best solution. I would recommend you start with the defaults, apply the load you expect and verify the throughput or latency you want, see how much HttpClient consumes, and only then configure down - if needed.

@marcel91
Author

marcel91 commented May 8, 2018

@sbordet I measured the native memory consumption, which is about 1 MB / thread on my system. That's why I wanted to reduce the number of threads. Also I did not experience any performance loss in my scenario when I did. Anyway, with the reduced number of Selectors, everything is working as expected now. :) So, thanks again!

@JigarJoshi

JigarJoshi commented Jul 13, 2019

@joakime @sbordet

I have a similar situation. I intend to start up a tiny instance of Jetty alongside another server. Jetty is mostly used for an application management interface, and the main business is handled over a gRPC server.

I also ran into a similar issue where the Jetty server startup sometimes fails, complaining of insufficient threads. I explicitly want to tune the number of threads because the default behavior looks at Runtime.getRuntime().availableProcessors(), which maps to physically available CPUs and not logical ones.

This is what I use as Jetty connector configuration

QueuedThreadPool threadPool = new QueuedThreadPool();
threadPool.setMaxThreads(2);
threadPool.setMinThreads(1);
threadPool.setIdleTimeout(60000);
threadPool.setReservedThreads(0);
Server server = new Server(threadPool);
server.setConnectors(new Connector[] { createConnector(address, server) });

and

private AbstractConnector createConnector(InetSocketAddress address, Server server) {
    ServerConnector connector = new ServerConnector(server, 1, 2);
    connector.setHost(address.getHostString());
    connector.setPort(address.getPort());
    for (ConnectionFactory connectionFactory : connector.getConnectionFactories()) {
        if (connectionFactory instanceof HttpConfiguration.ConnectionFactory) {
            ((HttpConfiguration.ConnectionFactory) connectionFactory).getHttpConfiguration()
                .setSendServerVersion(false);
        }
    }
    return connector;
}

Note: the above code is just a reference.

My understanding is that there will be 1 acceptor thread and 2 selector threads allocated, plus a thread pool (QueuedThreadPool) of initial size 1, so in total (1 + 2 + 1 = 4) threads will be allocated. This is what I want, to avoid the cost of creating unnecessary native threads.

The problem is:
Only sometimes, in the production environment, it fails to start with

stack_trace: j.lang.IllegalStateException: Insufficient configured threads: required=2 < max=2 for QueuedThreadPool[some-name.jetty.1505474932]@59bbb974{STARTED,1<=1<=2,i=1,r=0,q=0}[NO_TRY]
	at o.e.j.u.t.ThreadPoolBudget.check(ThreadPoolBudget.java:155)
	at o.e.j.u.t.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:129)
	at o.e.j.u.t.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:181)
	at o.e.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
	at o.e.j.u.c.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at o.e.j.u.c.ContainerLifeCycle.start(ContainerLifeCycle.java:167)
	at o.e.j.u.c.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
	at o.e.j.server.AbstractConnector.doStart(AbstractConnector.java:282)
	at o.e.j.s.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
	at o.e.j.server.ServerConnector.doStart(ServerConnector.java:236)
	at o.e.j.u.c.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at o.s.b.w.e.jetty.JettyWebServer.start(JettyWebServer.java:146)

runtime information

jetty-9.4.z-SNAPSHOT; built: 2019-06-10T16:30:51.723Z; git: afcf563148970e98786327af5e07c261fda175d3; jvm 1.8.0_161-b12

How can I disable this runtime determination of threads and start up this minimally configured Jetty?

Thank you for your help

@joakime
Contributor

joakime commented Jul 14, 2019

Runtime.getRuntime().availableProcessors() which maps to physically available CPUs and not logical.

That behavior is different depending on your OS and/or JVM version.
See: https://bugs.openjdk.java.net/browse/JDK-6515172

Jetty is mostly used for application management interface and the main business is handled over gRPC server.

Your configured thread pool limits couldn't handle a single static HTML page with a single JavaScript file and a single CSS file. (That's 3 threads right there, if the server was accessed from a normal web browser.)

QueuedThreadPool threadPool = new QueuedThreadPool();
threadPool.setMaxThreads(2);
threadPool.setMinThreads(1);
threadPool.setIdleTimeout(60000);
threadPool.setReservedThreads(0);

My understanding is that there will be 1 acceptor thread and 2 selector threads allocated, plus a thread pool (QueuedThreadPool) of initial size 1, so in total (1 + 2 + 1 = 4) threads will be allocated.

That will create a QueuedThreadPool with 1 pre-initialized thread, and a maximum of 2 threads. (leaving you with 1 unused thread).

stack_trace: j.lang.IllegalStateException: Insufficient configured threads: required=2 < max=2 for QueuedThreadPool[some-name.jetty.1505474932]@59bbb974{STARTED,1<=1<=2,i=1,r=0,q=0}[NO_TRY]

The "Insufficient configured threads" error is a terminal failure.
You must satisfy that minimum in order for Jetty to actually do anything.
The configuration you have presented is so low that it would be impossible to actually process a single incoming request. Hence the exception.
You MUST configure more threads in your configuration.
Note that this check occurs in many places, not just at app startup.
You could, for example, have a successfully started server sit there for hours, days, and then suddenly get a request that triggers a (used for the first time) Filter to initialize, and that Filter init suddenly pushes your thread budget over the configured maximum.

This is what I want to avoid cost of creating unnecessary native threads.

The QueuedThreadPool is still a ThreadPool and will keep threads around for reuse.
The unused threads will fall off one at a time based on your QueuedThreadPool idleTimeout (you have it set to 60000ms, so every minute a single idle thread is removed from the QueuedThreadPool).

The configuration you have is painfully low.
If you have even a single user with a web browser hitting a typical web page, set the maximum threads higher, MUCH higher.

Let's use some real world examples.

  1. Hit https://github.com/ - that results in 39 resources requested; it took 1200ms to get answers to all of them. The browser used 17 connections to access all of those resources. If we look at the resources that are hosted on github.com, we can see that 4 connections are used over that time frame, requesting the base HTML and 4 JavaScript resources (the other 35 resources are on other github.com-owned domains). - This single page, from a single browser, would need a QueuedThreadPool max setting of (acceptors + selectors + 4).
  2. Hit https://eclipse.org/ - that's 29 resources requested; it took 1400ms to access them all. The browser used 12 connections in total. Of those connecting to eclipse.org, 8 connections were used, requesting 11 resources. - This single page, from a single browser, would need a QueuedThreadPool max setting of (acceptors + selectors + 8).

You need to do the following ...

Set the QueuedThreadPool to default values temporarily.
Hit a few pages on your application management site and look at the connection usages in chrome or firefox.
Then estimate how many users you will have using your site concurrently.
Note: Don't assume that "users" maps one-to-one to physical people. Even a single user with multiple tabs open could impact this.

Now you have a baseline QueuedThreadPool maximum threads to start with.
maximum threads = acceptor threads + selector threads + (concurrent_usage * connections)
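For example (illustrative numbers only): with 1 acceptor, 2 selectors, and 3 concurrent users each using 4 connections, that baseline would be 1 + 2 + (3 × 4) = 15 maximum threads.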

But you are not done with your audit yet.

Are you using standard Servlet input/output streams? If so, then you have extra pressure on your ThreadPool. (you will likely need to increase your maximum threads)
Are you using modern Servlet Async I/O? If so, then you have low pressure on your ThreadPool. (you might even be able to lower your maximum threads).

Are you using features that increase your thread budget based on usage? (eg: websocket, dos filters, qos filters, http/2, connection monitoring, etc) If so, then you'll need to increase your maximum threads.
Are you using sub-features that increase your thread budget on usage? (eg: javax.websocket and stream based message handlers) If so, then you'll need to increase your maximum threads.

Another example: developers that use Jetty on Android, with a single user, talking on localhost via a WebView (a browser component on Android), typically have the following configuration - acceptors = 1, selectors = 1, minimum threads = 10, maximum threads = 16, idleTimeout = 30000ms. This will ensure that normal usage from the WebView will not create more threads on the server, and that responsiveness in the WebView stays normal.
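
As a rough sketch, that Android/localhost configuration might look like this in embedded code (Jetty 9.x APIs assumed; the port is illustrative):

QueuedThreadPool threadPool = new QueuedThreadPool(16 /* max */, 10 /* min */, 30000 /* idleTimeout ms */);
Server server = new Server(threadPool);
ServerConnector connector = new ServerConnector(server, 1 /* acceptors */, 1 /* selectors */);
connector.setPort(8080); // illustrative port
server.addConnector(connector);
server.start();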

@JigarJoshi

JigarJoshi commented Jul 14, 2019

Thank you @joakime for your reply.

In my case I want to limit the number of concurrent requests to 2. Jetty is used to serve RESTful APIs only, over traditional servlets (not WebFlux).

Are you using features that increase your thread budget based on usage? (eg: websocket, dos filters, qos filters, http/2, connection monitoring, etc) If so, then you'll need to increase your maximum threads.

I am using HTTP/1.1 and HTTP/2.

Are you using sub-features that increase your thread budget on usage? (eg: javax.websocket and stream based message handlers) If so, then you'll need to increase your maximum threads.

I am not using any stream-based handler or WebSocket.

Technically it should be achievable with a 2-thread executor. If the number of concurrent requests goes above 2 (for example, 100), then N - 2 of them (for example, 98) should be queued for execution, with some threshold to bound the queue size and start rejecting requests when the queue is full. I understand the performance impact of queuing and making requests starve for threads. As I mentioned, it is used for application management purposes by humans only, with an average RPS of 1, and the requests are very lightweight and not compute/IO/memory heavy (think of them as static responses in the form of JSON).

How do I configure Jetty with such a low footprint?

@sbordet
Contributor

sbordet commented Jul 19, 2019

Technically it should be achievable with a 2-thread executor.

Nope, you are making wrong assumptions about how Jetty works internally.
If you want to limit concurrent requests, use QoSFilter: https://www.eclipse.org/jetty/documentation/current/qos-filter.html.

@JigarJoshi

Technically it should be achievable with a 2-thread executor.

Nope, you are making wrong assumptions about how Jetty works internally.
If you want to limit concurrent requests, use QoSFilter: https://www.eclipse.org/jetty/documentation/current/qos-filter.html.

Please point me to the doc that explains the detailed thread modeling of Jetty. If I use the QoS filter and limit concurrent requests to 1, will I be able to use Jetty with 2 threads? Or what is the minimum number of threads I need to serve 1 concurrent request?

@sbordet
Contributor

sbordet commented Jul 19, 2019

Please explain why you want to configure Jetty with 2 threads.

Instead, leave the configuration at its default, and restrict the concurrency with QoSFilter.
If threads are not needed they won't be created and you will run with a minimum number of threads.
Trying to limit the concurrency by limiting the number of threads is just not the right solution.
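
A minimal sketch of that setup (assuming Jetty 9.4 with jetty-servlets and the servlet API on the classpath; the port and the limit are illustrative):

Server server = new Server(8080); // default thread pool, left alone
ServletContextHandler context = new ServletContextHandler();
FilterHolder qos = new FilterHolder(QoSFilter.class);
qos.setInitParameter("maxRequests", "2"); // at most 2 requests processed concurrently; others wait
context.addFilter(qos, "/*", EnumSet.of(DispatcherType.REQUEST));
server.setHandler(context);
server.start();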

@joakime
Contributor

joakime commented Jul 19, 2019

Please point me to the doc that explains the detailed thread modeling of Jetty. If I use the QoS filter and limit concurrent requests to 1, will I be able to use Jetty with 2 threads? Or what is the minimum number of threads I need to serve 1 concurrent request?

That is a huge topic, and has had multiple blog entries over the years.

Most recent, talking about how threads / requests / and processing models interact - https://webtide.com/eat-what-you-kill-without-starvation/

Threading models in Jetty are adaptive (no you can't configure them), and adjust themselves based on various technology choices and demands the server encounters.

In short, if you are using HTTP/1.1 only, then a 2-thread maximum executor is insufficient for even 1 request.
If you are using HTTP/2 (which you have indicated that you are using), then your thread pressures increase exponentially simply because of how HTTP/2 functions.

You have stated 2 goals.

  1. Limit the creation of native threads
  2. Limit the concurrent requests on specific resources

The process to solve for goal 1 is to have a solid minimum threads configuration (and a default level for maximum threads, along with the default unbounded queue below it)
The process to solve for goal 2 is to have QoSFilter configured to protect those specific resources. (not 100% of resources mind you, just specific ones that fit the url-pattern concepts in the servlet spec)

Attempting to solve those goals by setting arbitrarily small maximum thread configurations will not work with a 100% async server like Jetty and modern protocols (like HTTP/2 and websocket).
About 10 years ago, back when Jetty had blocking connectors, this kind of threading configuration was possible.
However, since Jetty 9.0.0 was released, this technique of controlling behavior at the threading level is no longer a viable option. (You can blame support for Servlet 3.1 Async I/O and HTTP/2 for this change.)

alexanderkiel added a commit to samply/blaze that referenced this issue Dec 2, 2021
I was trying to save memory for the metrics server, which isn't really
used much. But Jetty needs some minimum number of threads. This number
depends on the number of cores. With a system of 16 cores, 4 threads are
not sufficient.

More: jetty/jetty.project#2503
@jimfcarroll

jimfcarroll commented Jan 13, 2023

The number of threads you need is determined by the components within Jetty you start to use and the hardware present on your system (the number of cpu cores and network interfaces impacts nio and consequently your thread requirements).

Which makes Jetty difficult to use as an infrastructure component embedded in a larger system. If an app runs, say, dozens of processes and each contains an HTTP REST-like endpoint meant to be called from monitoring code (NOT a browser), then EACH Jetty instance thinks it is the only one on the platform and REQUIRES an unnecessary number of threads for this particular use case.

The configuration you have presented [max threads 2, min threads 1] is so low that it would be impossible to actually process a single incoming request.

Only if the client is a browser and what's being served is a web page.

@sbordet
Contributor

sbordet commented Jan 13, 2023

Which makes Jetty, supposedly a "lightweight" servlet container, useless as an infrastructure component embedded in a larger system.

That is rude, and there is no need to scream in upper case.

And it turns out major players in the field use Jetty as an infrastructure component embedded in larger systems with great success.

Jetty can be configured with a small number of threads.

Maybe if you rephrase your question politely, instead of insulting, we can help your lack of understanding.

@joakime
Contributor

joakime commented Jan 13, 2023

Which makes Jetty, supposedly a "lightweight" servlet container, useless as an infrastructure component embedded in a larger system. If an app runs, say, dozens of processes and each contains an http RESTlike endpoint meant to be called from monitoring code (NOT a browser), then EACH Jetty instance thinks is the only one on the platform and REQUIRES an absurd number of threads for this particular use case.

The choice of technology that you use within Jetty determines your threading demands.

If you write using 100% async techniques (processing and I/O), with no blocking anything, no Java IO stream usage, and HTTP/1.1, you can have a server with under 20 threads using less than 100MB of memory, serving several hundred user agents.
Your OS will use more memory managing the networking connections than Java + Jetty will use.

The minute you choose something like JAX-RS / Jersey / etc, you are suddenly using various blocking techniques, or Java IO streams, all of which puts a demand on your server that requires you to increase your resource usage.
Even subtle technology choices like JSON increase your resource demands, as they have fundamental requirements like InputStream to parse, which is blocking. (truly async json parser / generators in java exist, but IIRC are not used in popular REST libraries yet).

The "absurd" statement is also subjective.
We have folks that look at 200 max threads as lightweight (the overwhelming majority of users are in this territory), and only once you pass 30,000 threads are you in absurd territory (we have several users happily in this territory).
The folks that are looking at Loom (on the JDK) look at 100,000 threads as "no big deal" and "situation normal".
My first generation Raspberry Pi can run Jetty 11 with 500 threads without breaking a sweat at about 20% CPU usage. (but I don't see the point of pushing that hardware that high, when I can just pick technology choices that improve resource utilization on the server).

@jimfcarroll

jimfcarroll commented Jan 13, 2023

My apologies. I was not yelling, but you're right about the overall tone. I edited the relevant phrase. EDIT: also rephrased the "absurd" comment.

@jimfcarroll

The choice of technology that you use within Jetty determines your threading demands.

You mentioned it also considers the hardware, but it can't consider the system environment. If I run 100 Jetty instances as an embedded infrastructure component and each one thinks it has full access to 80 CPUs and will be serving some public website, then I get an "absurd" (not meant to be provocative) total number of threads running. When I try to limit the threads by providing my own pool, it fails, but only on certain hardware configurations.

@jimfcarroll

FWIW, it's an IoT system where there's a container started per device, all managed in Kubernetes with Prometheus as the main monitoring component. I've already addressed all of the "low-hanging" performance improvements and was working to minimize context switches (and even considering CPU-pinning some process threads). The HTTP service is only there to serve a per-container, periodic call from a Prometheus operator, so it could live with a single thread and blocking IO.

@joakime
Contributor

joakime commented Jan 13, 2023

You mentioned it also considers the hardware; but it can't consider the system environment. If I run 100 jetty instances as an embedded infrastructure component and each one thinks it has full access to 80 CPUs and will be serving some public website, then I get an "absurd" (not meant to be provocative) total number of threads running. When I try to limit the threads by providing my own pool, it fails, but only on certain hardware configurations.

This kind of environment, which is growing in popularity, is typically done in a few main ways.

  1. Careful provisioning and configuration on a per-instance basis. - Either in custom code or through the use of jetty-home/jetty-base with --include-dir directives for common configuration.
  2. Containerization - this is your choice: docker, containerd, k8s, etc. All can provision to the container limits on CPU (even partial CPU, like 2.21 CPUs) and max memory. Jetty will see these limits imposed on the container. (Well, technically, it's the JVM that sees them and makes those values available via the Java API.)
  3. A single Jetty instance, but multiple webapps, each with its own isolated classloader and only answering to specific virtual hosts.

The largest number of Jetty instances on a single machine I've come across is approaching 600 (something like 580-ish), on a commodity server. This company left their Jetty ThreadPool at the default configuration for all instances and has no problems. The minute they start to "tune" or "limit" or "constrain" the ThreadPool they encounter problems.

The Http service is only there to serve a per-container, periodic call from a Prometheus operator so it could live with a single thread and blocking IO.

Jetty is 100% async, it only uses Java NIO, there's no option to run blocking IO natively, only simulated (on top of NIO).
A single thread is a highly unrealistic goal and screams of premature optimization. (a single bad/problematic client can hang your server with this kind of requirement)
Keep in mind that Jetty will use threads as it sees fit, the count of threads active in the ThreadPool will grow to fit demand, and then scale back that count over time back to the minimum thread count. This is key, and explains how the high instance machines manage just fine with that many instances.

@jimfcarroll

Thanks. I'll take a look at what you're suggesting.

The Http service is only there to serve a per-container, periodic call from a Prometheus operator so it could live with a single thread and blocking IO.

Jetty is 100% async, it only uses Java NIO, there's no option to run blocking IO natively, only simulated (on top of NIO).

Okay. My point was only that the requirements for the HTTP service are very rudimentary.

@gregw
Contributor

gregw commented Jan 15, 2023

Jetty is fully configurable. But when not configured explicitly it will use heuristics based on things like the number of CPUs to set up configuration for thread pools, selectors and reserved threads.

Are these heuristics correct for all deployment scenarios? No. That's why there is explicit configuration available if your deployment is not well served by the heuristics.

The minimum number of threads needed by Jetty, if configured correctly, is only a few (it used to be 3, but I've not checked recently).

As @joakime said, limiting threads is probably dangerous premature optimization. You'd be better off limiting thread demand: configuring minimal selectors, no reserved threads and avoiding blocking applications.
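
A rough sketch of limiting demand rather than the pool (assuming Jetty 9.4 embedded APIs; the port is illustrative):

QueuedThreadPool threadPool = new QueuedThreadPool(); // defaults: threads are only created on demand
threadPool.setReservedThreads(0);

Server server = new Server(threadPool);
ServerConnector connector = new ServerConnector(server, 1 /* acceptor */, 1 /* selector */);
connector.setPort(8080); // illustrative port
server.addConnector(connector);
server.start();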

Unless you have a hyper restricted system, concern about number of threads is often misdirected.
