Insufficient configured threads #2503
The number of threads you need is determined by the components within Jetty you start to use and by the hardware present on your system (the number of CPU cores and network interfaces impacts NIO, and consequently your thread requirements). If your goal is to limit memory usage, you don't do that by configuring thread limits. A default-configured Jetty distribution for HTTP + webapp deployment can operate on an embedded system with 16MB (this is not a typo, yes, I do mean megabytes) of memory, serving hundreds of simultaneous clients.
Seeing as your stacktrace has HttpClient, your memory impact will be determined by the number of outstanding connections (there is a connection pool), the number of outstanding requests, your hardware again, the types of servers you connect to (servers using HTTP/2 will use more threads than HTTP/1), etc. Leave the threading alone (default values); consider instead configuring the maxConnectionsPerDestination and maxRequestsQueuedPerDestination values.
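For reference, those two limits can be set directly on HttpClient. A minimal sketch, assuming Jetty 9.4; the limit values here are arbitrary examples, not recommendations:

```java
import org.eclipse.jetty.client.HttpClient;

public class ClientLimits {
    public static void main(String[] args) throws Exception {
        HttpClient httpClient = new HttpClient();
        // Leave the thread pool at its defaults; bound memory by limiting
        // work instead: connections per destination and queued requests.
        httpClient.setMaxConnectionsPerDestination(8);      // example value; Jetty's default is 64
        httpClient.setMaxRequestsQueuedPerDestination(128); // example value; Jetty's default is 1024
        httpClient.start();
        // ... use the client ...
        httpClient.stop();
    }
}
```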
Also consider using the streaming APIs on HttpClient rather than the whole-content APIs.
In your case, you have likely not specified the number of selectors, so the heuristic default applies. However, you want to constrain the number of threads in the pool to a number that is too low for the heuristic, and so you get a startup failure, as described by the exception. If you really want to run with a small number of threads, make sure that you specify the number of selectors explicitly, as per the documentation linked above, for example using just one selector. Having said that, @joakime is right that if your goal is to reduce memory usage, you also want to tune many other things before looking at the thread pool.
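A sketch of pinning the selector count on the client transport, assuming Jetty 9.4 (where HttpClientTransportOverHTTP takes the selector count in its constructor):

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.http.HttpClientTransportOverHTTP;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class OneSelectorClient {
    public static void main(String[] args) throws Exception {
        // One selector instead of the cores-based heuristic, so a small
        // thread pool can satisfy the thread budget at startup.
        HttpClient httpClient =
                new HttpClient(new HttpClientTransportOverHTTP(1), null);
        httpClient.setExecutor(new QueuedThreadPool(8, 1)); // max=8, min=1
        httpClient.start();
    }
}
```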
@joakime @sbordet IMHO it is not the best idea to configure default values dependent on the number of CPUs on the system, especially if this configuration can stop the client from working at all. Imagine your software is running on a server with hundreds of (maybe virtual) CPUs. As far as I understand, in this case you could run into problems even with the default value of 200 threads.
@marcel91 there are cases where heuristics based on the number of cores are exactly what is needed, and other cases where they are not the best choice. It is possible to change the defaults if, like in your case, you want to go to the extreme and be in strict control of your system.
@sbordet I measured the native memory consumption, which is about 1 MB/thread on my system. That's why I wanted to reduce the number of threads. Also, I did not experience any performance loss in my scenario when I did. Anyway, with the reduced number of selectors, everything is working as expected now. :) So, thanks again!
I have a similar situation. I intend to start up a tiny instance of Jetty alongside another server. Jetty is mostly used for the application management interface, and the main business is handled by a gRPC server. I also ran into a similar issue where the Jetty server startup sometimes fails with a complaint about insufficient threads. I explicitly want to tune the number of threads because the default behavior looks at Runtime.getRuntime().availableProcessors(), which maps to physically available CPUs and not logical ones. This is what I use as my Jetty connector configuration
and
Note: the above code is just a reference. My understanding is that 1 acceptor thread and 2 selector threads will be allocated, plus a thread pool (QueuedThreadPool) of initial size 1 thread, so in total 1+2+1 = 4 threads will be allocated. This is what I want, to avoid the cost of creating unnecessary native threads. The problem is:
runtime information
How can I disable this runtime determination of threads and start up this minimally configured Jetty? Thank you for your help.
That behavior is different depending on your OS and/or JVM version.
Your configured thread pool limits couldn't handle a single static HTML page with a single JavaScript file and a single CSS file (that's 3 threads right there, if the server was accessed from a normal web browser).
That will create a QueuedThreadPool with 1 pre-initialized thread, and a maximum of 2 threads. (leaving you with 1 unused thread).
The QueuedThreadPool is still a ThreadPool and will keep threads around for reuse. The configuration you have is painfully low. Let's use some real-world examples.
You need to do the following ...

- Set the QueuedThreadPool to default values temporarily. Now you have a baseline QueuedThreadPool maximum threads to start with. But you are not done with your audit yet.
- Are you using standard Servlet input/output streams? If so, then you have extra pressure on your ThreadPool (you will likely need to increase your maximum threads).
- Are you using features that increase your thread budget based on usage (e.g. WebSocket, DoS filters, QoS filters, HTTP/2, connection monitoring, etc.)? If so, then you'll need to increase your maximum threads.

Another example: developers that use Jetty on Android, with a single user, talking on localhost, via a WebView (a browser on Android), typically have the following configuration: acceptors = 1, selectors = 1, minimum threads = 10, maximum threads = 16, idleTimeout = 30000 ms. This will ensure that normal usage from the WebView will not create more threads on the server, and that responsiveness in the WebView is normal.
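Those Android-style numbers translate into embedded-Jetty code roughly like this (a sketch, assuming Jetty 9.4; the port is an arbitrary example):

```java
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class TinyServer {
    public static void main(String[] args) throws Exception {
        // min=10, max=16, idleTimeout=30000ms, as in the example above.
        QueuedThreadPool threadPool = new QueuedThreadPool(16, 10);
        threadPool.setIdleTimeout(30000);

        Server server = new Server(threadPool);
        // acceptors=1, selectors=1
        ServerConnector connector =
                new ServerConnector(server, 1, 1, new HttpConnectionFactory());
        connector.setPort(8080); // example port
        server.addConnector(connector);
        server.start();
    }
}
```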
Thank you @joakime for your reply. In my case I want to limit the number of concurrent requests to 2. Jetty is used to serve RESTful APIs only, over the traditional Servlet model (non-WebFlux).
I am using HTTP/1.1 and HTTP/2.
Technically it should be achievable with a 2-threaded executor: if the number of concurrent requests goes above 2 (for example, 100), then N - 2 of them (for example, 98) should be queued for execution, with some threshold to bound the queue size and start rejecting requests if the queue is full. I understand the performance impact of queuing and making requests starve for threads. As I mentioned, it is used for application management purposes, by humans only, with an average RPS of 1, and each request is very lightweight and not compute/IO/memory heavy (think of it as a static response in the form of JSON). How do I configure Jetty at such a low footprint?
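The queue-then-reject semantics described here can be sketched with a plain JDK executor. This only illustrates the desired behavior; it is not how Jetty's QueuedThreadPool works internally, as the replies make clear:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedExecutorDemo {
    // Exactly 2 worker threads and a bounded queue. Submissions beyond
    // (2 running + queueCapacity queued) are rejected outright.
    public static ThreadPoolExecutor boundedExecutor(int queueCapacity) {
        return new ThreadPoolExecutor(
                2, 2,                               // core == max == 2 threads
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity),
                new ThreadPoolExecutor.AbortPolicy()); // reject when full
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = boundedExecutor(2);
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        pool.execute(blocker); // occupies worker thread 1
        pool.execute(blocker); // occupies worker thread 2
        pool.execute(blocker); // queued (1 of 2)
        pool.execute(blocker); // queued (2 of 2)
        boolean rejected = false;
        try {
            pool.execute(blocker); // 5th task: queue full, so rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        System.out.println("rejected=" + rejected); // prints "rejected=true"
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```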
Nope, you are assuming wrong things about how Jetty works internally.
Please point me to the doc that explains the detailed thread modeling of Jetty. If I use the QoS filter and limit concurrent requests to 1, will I be able to use Jetty with 2 threads? Or what is the minimum number of threads I need to serve 1 concurrent request?
Please explain why you want to configure Jetty with 2 threads. Instead, leave the configuration at its default, and restrict the concurrency with
That is a huge topic, and it has had multiple blog entries over the years. The most recent, talking about how threads, requests, and processing models interact: https://webtide.com/eat-what-you-kill-without-starvation/ Threading models in Jetty are adaptive (no, you can't configure them), and they adjust themselves based on the various technology choices and demands the server encounters. In short, if you are using HTTP/1.1 only, then a 2-thread-maximum executor is insufficient for even 1 request. You have stated 2 goals.
The process to solve goal 1 is to have a solid minimum-threads configuration (and a default level for maximum threads, along with the default unbounded queue below it). Attempting to solve those goals by setting arbitrarily small maximum-thread configurations will not work with a 100% async server like Jetty and modern protocols (like HTTP/2 and WebSocket).
I was trying to save memory for the metrics server, which isn't really used much. But Jetty needs some minimum number of threads, and this number depends on the number of cores. With a system of 16 cores, 4 threads are not sufficient. More: jetty/jetty.project#2503
Which makes Jetty difficult to use as an infrastructure component embedded in a larger system. If an app runs, say, dozens of processes, and each contains an HTTP REST-like endpoint meant to be called from monitoring code (NOT a browser), then EACH Jetty instance thinks it is the only one on the platform and REQUIRES an unnecessary number of threads for this particular use case.
Only if the client is a browser and what's being served is a web page.
That is rude, and there is no need to scream in uppercase. And it turns out major players in the field use Jetty as an infrastructure component embedded in larger systems with great success. Jetty can be configured with a small number of threads. Maybe if you rephrase your question politely, instead of insulting, we can help your lack of understanding.
The choice of technology that you use within Jetty determines your threading demands. If you write using 100% async techniques (processing and I/O), with no blocking anything and no Java IO stream usage, on HTTP/1.1, you can have a server with under 20 threads using less than 100MB of memory, serving several hundred user agents. The minute you choose something like JAX-RS / Jersey / etc., you are suddenly using various blocking techniques, or Java IO streams, all of which put a demand on your server that requires you to increase your resource usage. The "absurd" statement is also subjective.
My apologies. I was not yelling, but you're right about the overall tone. I edited the relevant phrase. EDIT: also rephrased the "absurd" comment. |
You mentioned it also considers the hardware, but it can't consider the system environment. If I run 100 Jetty instances as an embedded infrastructure component, and each one thinks it has full access to 80 CPUs and will be serving some public website, then I get an "absurd" (not meant to be provocative) total number of threads running. When I try to limit the threads by providing my own pool, it fails, but only on certain hardware configurations.
FWIW, it's an IoT system where a container is started per device, all managed in Kubernetes with Prometheus as the main monitoring component. I've already addressed all of the "low-hanging" performance improvements and was working to minimize context switches (and even considering CPU-pinning some process threads). The HTTP service is only there to serve a per-container, periodic call from a Prometheus operator, so it could live with a single thread and blocking IO.
This kind of environment, which is growing in popularity, is typically done in a few main ways.
The largest number of Jetty instances on a single machine I've come across is approaching 600 (something like 580-ish), on a commodity server. This company left their Jetty ThreadPool at the default configuration for all instances and has no problems. The minute they start to "tune" or "limit" or "constrain" the ThreadPool, they encounter problems.
Jetty is 100% async; it only uses Java NIO. There's no option to run blocking IO natively, only simulated (on top of NIO).
Thanks. I'll take a look at what you're suggesting.
Okay. My point was only that the requirements for the HTTP service are very rudimentary.
Jetty is fully configurable. But when not configured explicitly, it will use heuristics based on things like the number of CPUs to set up configuration for thread pools, selectors, and reserved threads. Are these heuristics correct for all deployment scenarios? No. That's why there is explicit configuration available if your deployment is not well served by the heuristics. The minimum number of threads needed by Jetty, if configured correctly, is only a few (it used to be 3, but I've not checked recently). As @joakime said, limiting threads is probably dangerous premature optimization. You'd be better off limiting thread demand: configuring minimal selectors, no reserved threads, and avoiding blocking applications. Unless you have a hyper-restricted system, concern about the number of threads is often misdirected.
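A sketch of "limiting thread demand" in that spirit, assuming an embedded Jetty 9.4 server (minimal selectors, no reserved threads; the pool sizes and port here are illustrative, not recommendations):

```java
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class LowDemandServer {
    public static void main(String[] args) throws Exception {
        // Keep the pool near defaults, but remove reserved threads...
        QueuedThreadPool threadPool = new QueuedThreadPool(200, 8);
        threadPool.setReservedThreads(0);

        Server server = new Server(threadPool);
        // ...and reduce demand with 1 acceptor and 1 selector.
        ServerConnector connector =
                new ServerConnector(server, 1, 1, new HttpConnectionFactory());
        connector.setPort(9090); // example port
        server.addConnector(connector);
        server.start();
    }
}
```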
We try to limit the number of threads used by Jetty to reduce memory consumption with:
QueuedThreadPool threadPool = new QueuedThreadPool(8, 1);
httpClient.setExecutor(threadPool);
During the initialization of the HttpClient we sometimes get an exception stating that we have not configured enough threads. The problem is that this is not reproducible, and the number of required threads also varies:
Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=9 < max=8 for QueuedThreadPool[qtp1658773836]@62dee14c{STARTED,1<=1<=8,i=1,q=0}
at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:149)
at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:130)
at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:175)
at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:251)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
at org.eclipse.jetty.client.AbstractConnectorHttpClientTransport.doStart(AbstractConnectorHttpClientTransport.java:64)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
at org.eclipse.jetty.client.HttpClient.doStart(HttpClient.java:241)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=13 < max=8 for QueuedThreadPool[qtp1015743074]@3c8b0262{STARTED,1<=1<=8,i=1,q=0}
at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:149)
at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:130)
at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:175)
at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:251)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
...
Often we don't have any problems at all. Is there a minimum number of threads that guarantees this problem does not occur?
Thanks!
Marcel