
Server becomes unresponsive after sitting idle from a load spike #3550

Closed
tholu opened this issue Apr 15, 2019 · 33 comments · Fixed by #3586
Assignees: gregw
Labels: Bug (For general bugs on Jetty side), High Priority, Sponsored (This issue affects a user with a commercial support agreement)

Comments

@tholu commented Apr 15, 2019

I tested Jetty 9.4.16 over the weekend under quite some load, although it's currently only available on Maven Central and has not officially been announced.

After a few hours, Jetty became unresponsive and I got 502 errors from my reverse proxy before it. I downgraded to Jetty 9.4.15 again and it seems to be running fine.

I really can't pinpoint what might cause this behavior, as it only happened after a few hours, but repeatedly. I use Glowroot to monitor the application and found nothing obvious in the thread dump either. I'm out of ideas on how to debug such situations.

Can somebody reproduce this? Any ideas how to further debug this?

@sbordet (Contributor) commented Apr 15, 2019

When it's unresponsive, please take a JVM thread dump and a Jetty server dump.

Using TLS?

What OS/JDK?
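
For reference, a minimal sketch (an assumed setup, not from this thread) of capturing both dumps from inside the JVM; running `jstack <pid>` from a shell is the more common way to get the thread dump:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

import org.eclipse.jetty.server.Server;

public class DiagnosticDumps
{
    // Print a JVM thread dump followed by the Jetty server dump.
    // Assumes a reference to the running Server instance is available.
    public static void printDumps(Server server)
    {
        // ThreadInfo.toString() truncates deep stacks; jstack gives the full dump.
        for (ThreadInfo info : ManagementFactory.getThreadMXBean().dumpAllThreads(true, true))
            System.err.print(info);

        // Same output as the JMX dump() operation.
        System.err.println(server.dump());
    }
}
```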

@tholu (Author) commented Apr 15, 2019

Debian 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux
OpenJDK Runtime Environment (build 1.8.0_212-8u212-b01-1~deb9u1-b01)

No TLS. Thanks for the hint about the Jetty server dump, I'll revisit the issue as soon as I can reproduce this.

@sbordet (Contributor) commented Apr 15, 2019

I strongly recommend that you shy away from any JDK provided by Debian.

Please use official versions from Oracle or adoptopenjdk.net, which are known to A) be stable and B) pass the TCK.
Otherwise we could spend time wondering why it fails, only to figure out that it was a badly merged backport or some Debian modification that introduced the problem.

The latest official JDK 8 release is 8u202. You are using some 8u212 with who knows what code in it and who knows how stable that is.

Using standard OpenJDK binaries would allow us to possibly replicate the issue.

@tholu (Author) commented Apr 15, 2019

@sbordet Thanks for the recommendation & explanation! I stumbled upon AdoptOpenJDK anyway and am planning to migrate to it (from 8 to 11 as well).

@joakime (Contributor) commented Apr 15, 2019

I think 8u212 is a release outside of the Oracle Java 8 public support window.
The last public Oracle Java 8 release was 8u202.
Being an open source project, it would be very difficult to support something that isn't public.

@tholu (Author) commented Apr 15, 2019

I will migrate to AdoptOpenJDK 11, try Jetty 9.4.16 (or whatever version is released by then), and see if the issue can be reproduced there. Until then, I'll close this issue. Thanks!

@tholu tholu closed this as completed Apr 15, 2019
@joakime (Contributor) commented Apr 15, 2019

Looks like https://adoptopenjdk.net/upstream.html lists 8u212 as Early Access (EA); is that right?

@e-hubert-opti

After upgrading our test system from Jetty 9.4.15 to Jetty 9.4.16, we seem to observe the same issue.
OS: CentOS 6.6
Java: OpenJDK 11.0.2
We have not yet investigated the issue any further (e.g., using JVM thread dumps and Jetty server dumps) and have not yet tried 9.4.17, but will do so once we find the time (likely tomorrow). Just wanted to let you know. As this ticket is already closed, we will likely create a new ticket once we have more details at hand.

@tholu (Author) commented Apr 23, 2019

@e-hubert Thank you for reporting this! I'll reopen the issue so you can add your findings here. We still have to migrate to AdoptOpenJDK before we can contribute more to solving this.

@tholu tholu reopened this Apr 23, 2019
@ptribble

We're also seeing this issue, or something like it, on one of our systems just updated to 9.4.17 (from 9.4.11, I think).

In our case we're running illumos (Solaris) with openjdk8u212, but I don't think the OpenJDK is the problem: we've been running that OpenJDK for a week with the older Jetty without any problem, but Jetty 9.4.17 immediately started to fail.

@joakime (Contributor) commented Apr 23, 2019

For those having issues, can you please enable QueuedThreadPool.setDetailedDump(true) and perform a server dump when you experience this issue?
Then include that dump detail here, in this issue.

For those running jetty-home (or the older jetty-distribution) ...

  • enable the property jetty.threadPool.detailedDump=true
  • enable the jmx module
  • use any JMX console (jmc or jconsole) and perform a Server dump() operation when the server enters the state mentioned.
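
For embedded Jetty users, a rough equivalent of the steps above (a sketch under an assumed setup; the class name and wiring are illustrative):

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class DetailedDumpExample
{
    public static void main(String[] args) throws Exception
    {
        QueuedThreadPool threadPool = new QueuedThreadPool(250, 10);
        threadPool.setDetailedDump(true); // same effect as jetty.threadPool.detailedDump=true

        Server server = new Server(threadPool);
        // ... add connectors and handlers as usual ...
        server.start();

        // When the server enters the stuck state, this prints the same
        // output as the JMX dump() operation:
        System.err.println(server.dump());
        server.join();
    }
}
```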

@cddude229

I've experienced something similar starting today as well: a 9.4.14 -> 9.4.16 upgrade, Java 8, Ubuntu 18.04. I'll try to get a Server.dump() shortly; I'm working on proving my theory below and will take the dump if the theory proves true. :)

As far as I can tell, the lockup (in my case) is caused when ReservedThreadExecutor.capacity > QueuedThreadPool.minThreads. When my service runs with capacity=8 and minThreads=10, it works correctly. But when I set capacity=16, I notice that every thread mentioning ReservedThread.run disappears from thread dumps within a short time. (We choose reserved capacity using the heuristic, in case that's relevant.)

After doing some diving into the code changes, I believe it is one of the changes introduced by one of these commits (which are new in 9.4.16): a982422 1097354 5177abb 9cda1ce

In particular, in 9.4.14 (and 9.4.15) we only shrink the QTP relative to maxThreads, not relative to minThreads. This was changed in 1097354#diff-2572dae5bd06448110dfdfe5038ca01dL785. My gut tells me that minThreads is actually closer to the right logic, but I think it's producing incorrect behavior by removing reserved threads. Maybe it should be < minThreads + reservedThreads? (In my setup, maxThreads is 250, so it's unlikely the threads were ever stopped before.)
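
To make the condition concrete, here is a hypothetical embedded equivalent of the two setups described above (assuming the XML properties map to QueuedThreadPool setters, as in the config quoted further down):

```java
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class ReservedVsMinThreads
{
    public static void main(String[] args)
    {
        QueuedThreadPool works = new QueuedThreadPool(250, 10);   // maxThreads=250, minThreads=10
        works.setReservedThreads(8);    // capacity (8) <= minThreads (10): ran correctly

        QueuedThreadPool locksUp = new QueuedThreadPool(250, 10);
        locksUp.setReservedThreads(16); // capacity (16) > minThreads (10): lockup observed
    }
}
```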

@cddude229 commented Apr 23, 2019

I believe I've confirmed my theory; I increased instance size (so reserved > minThreads) and it instantly locked up. Queue size continued to grow while locked up:
[screenshot: request queue size growing while the server is locked up]

Here's the dump() output (classpath redacted except for Jetty; the rest is unredacted):

$>run dump
#calling operation dump of mbean org.eclipse.jetty.server:id=0,type=server
#operation returns:
Server@f6c48ac{STARTED}[9.4.16.v20190411] - STARTED
+= QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}] - STARTED
|  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  += ReservedThreadExecutor@7791a895{s=0/16,p=0} - STARTED
|  +> threads size=10
|     +> 27 qtp1433867275-27 SELECTING RUNNABLE @ sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
|     +> 25 qtp1433867275-25 SELECTING RUNNABLE @ sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
|     +> 28 qtp1433867275-28 SELECTING RUNNABLE @ sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
|     +> 32 qtp1433867275-32-acceptor-1@3e0fe68f-ServerConnector@31dc339b{HTTP/1.1,[http/1.1]}{0.0.0.0:23668} ACCEPTING RUNNABLE @ sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
|     +> 30 qtp1433867275-30 SELECTING RUNNABLE @ sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
|     +> 26 qtp1433867275-26 SELECTING RUNNABLE @ sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
|     +> 23 qtp1433867275-23 SELECTING RUNNABLE @ sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
|     +> 24 qtp1433867275-24 SELECTING RUNNABLE @ sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
|     +> 29 qtp1433867275-29 SELECTING RUNNABLE @ sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
|     +> 31 qtp1433867275-31-acceptor-0@1a6804d1-ServerConnector@31dc339b{HTTP/1.1,[http/1.1]}{0.0.0.0:23668} ACCEPTING BLOCKED @ sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:234)
+= ScheduledExecutorScheduler@52d455b8{STARTED} - STARTED
|  +> sun.misc.Unsafe.park(Native Method)
|  +> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
|  +> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
|  +> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
|  +> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
|  +> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
|  +> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
|  +> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
|  +> java.lang.Thread.run(Thread.java:748)
+= GzipHandler@73846619{STARTED} - STARTED
|  += HandlerCollection@5649fd9b{STARTED} - STARTED
|  |  += ContextHandlerCollection@6adede5{STARTED} - STARTED
|  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  += o.e.j.w.WebAppContext@13c27452{Glass,/glass,file:///usr/sumo/glass-20.1-13709/webapps/glass/,AVAILABLE}{/glass} - STARTED
|  |  |     += org.eclipse.jetty.server.session.SessionHandler2000584752==dftMaxIdleSec=1800 - STARTED
|  |  |     |  += ConstraintSecurityHandler@6b861a26{STARTED} - STARTED
|  |  |     |  |  +- org.eclipse.jetty.security.DefaultAuthenticatorFactory@44e31522
|  |  |     |  |  += ServletHandler@a0b8f79{STARTED} - STARTED
|  |  |     |  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |     |  |  |  +> listeners ServletHandler@a0b8f79{STARTED} size=4
|  |  |     |  |  |  |  +> ListenerHolder@37a963d7{STARTED}: org.eclipse.jetty.servlet.listener.ELContextCleaner - STARTED
|  |  |     |  |  |  |  +> ListenerHolder@41d1c2a9{STARTED}: org.eclipse.jetty.servlet.listener.IntrospectorCleaner - STARTED
|  |  |     |  |  |  |  +> ListenerHolder@706b1132{STARTED}: org.apache.logging.log4j.web.Log4jServletContextListener - STARTED
|  |  |     |  |  |  |  +> ListenerHolder@635887e2{STARTED}: com.sumologic.assembly.webapp.MetricsEnabledDoOrDieContextLoaderListener - STARTED
|  |  |     |  |  |  +> filters ServletHandler@a0b8f79{STARTED} size=3
|  |  |     |  |  |  |  +> glassAuthFilter@1f13f785==com.sumologic.glass.GlassAuthFilter,inst=true,async=false - STARTED
|  |  |     |  |  |  |  |  +> com.sumologic.glass.GlassAuthFilter@5018ba44
|  |  |     |  |  |  |  +> log4jServletFilter@78d2c3f4==org.apache.logging.log4j.web.Log4jServletFilter,inst=true,async=false - STARTED
|  |  |     |  |  |  |  |  +> org.apache.logging.log4j.web.Log4jServletFilter@5f922c9b
|  |  |     |  |  |  |  +> dynamicTracingFilter@5fbd4733==com.sumologic.webutil.filter.DynamicTracingFilter,inst=true,async=false - STARTED
|  |  |     |  |  |  |     +> com.sumologic.webutil.filter.DynamicTracingFilter@4a621c1a
|  |  |     |  |  |  +> filterMappings ServletHandler@a0b8f79{STARTED} size=3
|  |  |     |  |  |  |  +> [/*]/[]/[REQUEST]=>glassAuthFilter
|  |  |     |  |  |  |  +> [/*]/[]/[REQUEST, INCLUDE, ERROR, FORWARD]=>log4jServletFilter
|  |  |     |  |  |  |  +> [/*]/[]/[REQUEST]=>dynamicTracingFilter
|  |  |     |  |  |  +> servlets ServletHandler@a0b8f79{STARTED} size=4
|  |  |     |  |  |  |  +> default@5c13d641==org.eclipse.jetty.servlet.DefaultServlet,jsp=null,order=0,inst=true,async=false - STARTED
|  |  |     |  |  |  |  |  +> org.eclipse.jetty.servlet.DefaultServlet@26c22b11
|  |  |     |  |  |  |  |  +> initParams size=9
|  |  |     |  |  |  |  |     +> dirAllowed=false
|  |  |     |  |  |  |  |     +> maxCacheSize=256000000
|  |  |     |  |  |  |  |     +> maxCachedFileSize=200000000
|  |  |     |  |  |  |  |     +> welcomeServlets=false
|  |  |     |  |  |  |  |     +> useFileMappedBuffer=true
|  |  |     |  |  |  |  |     +> acceptRanges=true
|  |  |     |  |  |  |  |     +> etags=false
|  |  |     |  |  |  |  |     +> maxCachedFiles=2048
|  |  |     |  |  |  |  |     +> redirectWelcome=false
|  |  |     |  |  |  |  +> jsp@19c47==org.eclipse.jetty.servlet.NoJspServlet,jsp=null,order=0,inst=true,async=false - STARTED
|  |  |     |  |  |  |  |  +> org.eclipse.jetty.servlet.NoJspServlet@2071ae61
|  |  |     |  |  |  |  |  +> initParams size=5
|  |  |     |  |  |  |  |     +> fork=false
|  |  |     |  |  |  |  |     +> compilerSourceVM=1.7
|  |  |     |  |  |  |  |     +> logVerbosityLevel=DEBUG
|  |  |     |  |  |  |  |     +> compilerTargetVM=1.7
|  |  |     |  |  |  |  |     +> xpoweredBy=false
|  |  |     |  |  |  |  +> display@63a518c2==org.springframework.web.servlet.DispatcherServlet,jsp=null,order=1,inst=true,async=false - STARTED
|  |  |     |  |  |  |  |  +> org.springframework.web.servlet.DispatcherServlet@42f8f6f3
|  |  |     |  |  |  |  +> kubernetesProxy@9e6f2ac8==com.sumologic.glass.proxy.KubernetesProxyServlet,jsp=null,order=3,inst=true,async=false - STARTED
|  |  |     |  |  |  |     +> com.sumologic.glass.proxy.KubernetesProxyServlet@68f65c80
|  |  |     |  |  |  +> servletMappings ServletHandler@a0b8f79{STARTED} size=5
|  |  |     |  |  |     +> [/]=>default
|  |  |     |  |  |     +> [*.jsp, *.jspf, *.jspx, *.xsp, *.JSP, *.JSPF, *.JSPX, *.XSP]=>jsp
|  |  |     |  |  |     +> [/json/*, /api/json/*]=>display
|  |  |     |  |  |     +> [/proxy/kubernetes/*]=>kubernetesProxy
|  |  |     |  |  |     +> [/*]=>default
|  |  |     |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |     |  |  +> roles size=1
|  |  |     |  |  |  +> java.util.concurrent.CopyOnWriteArraySet@0(size=0)
|  |  |     |  |  +> constraints size=1
|  |  |     |  |     +> java.util.HashMap$EntrySet@a37740c0(size=1)
|  |  |     |  |        +: /={TRACE={RoleInfo,F,C[],None}, TRACE.omission={RoleInfo[],None}}
|  |  |     |  += org.eclipse.jetty.server.session.DefaultSessionCache@36bf28d2[evict=-1,removeUnloadable=false,saveOnCreate=false,saveOnInactiveEvict=false] - STARTED
|  |  |     |  |  += org.eclipse.jetty.server.session.NullSessionDataStore@5eab49e8[passivating=false,graceSec=3600] - STARTED
|  |  |     |  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |     |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |     |  +~ DefaultSessionIdManager@3f29e26{STARTED}[worker=node0] - STARTED
|  |  |     |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |     += ErrorPageErrorHandler@73a65712{STARTED} - STARTED
|  |  |     |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |     +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |     +> WebAppClassLoader=Glass@22eeefeb
|  |  |     |  +> URLs size=384
|  |  |     |  |  +> file:/usr/sumo/glass-20.1-13709/webapps/glass/WEB-INF/classes/
|  |  |     |  |  +> file:/usr/sumo/glass-20.1-13709/webapps/glass/WEB-INF/[redacted]
|  |  |     |  +> startJarLoader@28c97a5
|  |  |     +> Systemclasses o.e.j.w.WebAppContext@13c27452{Glass,/glass,file:///usr/sumo/glass-20.1-13709/webapps/glass/,AVAILABLE}{/glass} size=15
|  |  |     |  +> java.
|  |  |     |  +> javax.
|  |  |     |  +> org.eclipse.jetty.continuation.
|  |  |     |  +> org.eclipse.jetty.jaas.
|  |  |     |  +> org.eclipse.jetty.jmx.
|  |  |     |  +> org.eclipse.jetty.jndi.
|  |  |     |  +> org.eclipse.jetty.jsp.JettyJspServlet
|  |  |     |  +> org.eclipse.jetty.servlet.DefaultServlet
|  |  |     |  +> org.eclipse.jetty.servlets.PushCacheFilter
|  |  |     |  +> org.eclipse.jetty.servlets.PushSessionCacheFilter
|  |  |     |  +> org.eclipse.jetty.util.annotation.
|  |  |     |  +> org.eclipse.jetty.util.log.
|  |  |     |  +> org.eclipse.jetty.websocket.
|  |  |     |  +> org.w3c.
|  |  |     |  +> org.xml.
|  |  |     +> Serverclasses o.e.j.w.WebAppContext@13c27452{Glass,/glass,file:///usr/sumo/glass-20.1-13709/webapps/glass/,AVAILABLE}{/glass} size=18
|  |  |     |  +> -org.eclipse.jetty.alpn.
|  |  |     |  +> -org.eclipse.jetty.apache.
|  |  |     |  +> -org.eclipse.jetty.continuation.
|  |  |     |  +> -org.eclipse.jetty.jaas.
|  |  |     |  +> -org.eclipse.jetty.jmx.
|  |  |     |  +> -org.eclipse.jetty.jndi.
|  |  |     |  +> -org.eclipse.jetty.jsp.
|  |  |     |  +> -org.eclipse.jetty.server.session.SessionData
|  |  |     |  +> -org.eclipse.jetty.servlet.DefaultServlet
|  |  |     |  +> -org.eclipse.jetty.servlet.NoJspServlet
|  |  |     |  +> -org.eclipse.jetty.servlet.listener.
|  |  |     |  +> -org.eclipse.jetty.servlets.
|  |  |     |  +> -org.eclipse.jetty.util.annotation.
|  |  |     |  +> -org.eclipse.jetty.util.log.
|  |  |     |  +> -org.eclipse.jetty.websocket.
|  |  |     |  +> org.eclipse.jdt.
|  |  |     |  +> org.eclipse.jetty.
|  |  |     |  +> org.objectweb.asm.
|  |  |     +> Configurations o.e.j.w.WebAppContext@13c27452{Glass,/glass,file:///usr/sumo/glass-20.1-13709/webapps/glass/,AVAILABLE}{/glass} size=5
|  |  |     |  +> org.eclipse.jetty.webapp.WebInfConfiguration@773ef8c3
|  |  |     |  +> org.eclipse.jetty.webapp.WebXmlConfiguration@2df6b8f8
|  |  |     |  +> org.eclipse.jetty.webapp.MetaInfConfiguration@2968bf62
|  |  |     |  +> org.eclipse.jetty.webapp.FragmentConfiguration@7df0b188
|  |  |     |  +> org.eclipse.jetty.webapp.JettyWebXmlConfiguration@7d88a615
|  |  |     +> Handler attributes o.e.j.w.WebAppContext@13c27452{Glass,/glass,file:///usr/sumo/glass-20.1-13709/webapps/glass/,AVAILABLE}{/glass} size=2
|  |  |     |  +> javax.servlet.context.tempdir=/usr/sumo/temp/glass/jetty-0.0.0.0-23668-glass-_glass-any-4217001236733563270.dir
|  |  |     |  +> org.eclipse.jetty.server.Executor=QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}]
|  |  |     +> Context attributes o.e.j.w.WebAppContext@13c27452{Glass,/glass,file:///usr/sumo/glass-20.1-13709/webapps/glass/,AVAILABLE}{/glass} size=7
|  |  |     |  +> org.eclipse.jetty.util.DecoratedObjectFactory=org.eclipse.jetty.util.DecoratedObjectFactory[decorators=1]
|  |  |     |  +> org.apache.logging.log4j.spi.LoggerContext.INSTANCE=org.apache.logging.log4j.core.LoggerContext@6155d082
|  |  |     |  +> org.springframework.web.context.support.ServletContextScope=org.springframework.web.context.support.ServletContextScope@65a1f024
|  |  |     |  +> org.springframework.web.context.WebApplicationContext.ROOT=Root WebApplicationContext: startup date [Tue Apr 23 16:27:11 PDT 2019]; root of context hierarchy
|  |  |     |  +> resourceCache=ResourceCache[null,org.eclipse.jetty.servlet.DefaultServlet@26c22b11]@157208669
|  |  |     |  +> org.apache.logging.log4j.web.Log4jWebSupport.INSTANCE=org.apache.logging.log4j.web.Log4jWebInitializerImpl@71da4c0f
|  |  |     |  +> org.springframework.web.servlet.FrameworkServlet.CONTEXT.display=WebApplicationContext for namespace 'display-servlet': startup date [Tue Apr 23 16:27:28 PDT 2019]; parent: Root WebApplicationContext
|  |  |     +> Initparams o.e.j.w.WebAppContext@13c27452{Glass,/glass,file:///usr/sumo/glass-20.1-13709/webapps/glass/,AVAILABLE}{/glass} size=5
|  |  |        +> log4jConfiguration=classpath:glass_log4j2.xml
|  |  |        +> log4j2.contextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
|  |  |        +> org.eclipse.jetty.servlet.SessionCookie=SUMOGLASSID
|  |  |        +> org.eclipse.jetty.servlet.SessionURL=none
|  |  |        +> log4j.shutdownHookEnabled=false
|  |  += DefaultHandler@64bfbc86{STARTED} - STARTED
|  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
+= DeploymentManager@6e1ec318{STARTED} - STARTED
|  +~ WebAppProvider@23e028a9{STARTED} - STARTED
|  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
+= ServerConnector@31dc339b{HTTP/1.1,[http/1.1]}{0.0.0.0:23668} - STARTED
|  +~ Server@f6c48ac{STARTED}[9.4.16.v20190411] - STARTED
|  +~ QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}] - STARTED
|  +~ ScheduledExecutorScheduler@52d455b8{STARTED} - STARTED
|  +- org.eclipse.jetty.io.ArrayByteBufferPool@3631667d
|  += HttpConnectionFactory@6ea12c19[HTTP/1.1] - STARTED
|  |  +- HttpConfiguration@7481775c{32768/8192,16384/8192,http://:-1,[]}
|  |  |  +> customizers size=0
|  |  |  +> formEncodedMethods size=2
|  |  |  |  +> POST
|  |  |  |  +> PUT
|  |  |  +> outputBufferSize=32768
|  |  |  +> outputAggregationSize=8192
|  |  |  +> requestHeaderSize=16384
|  |  |  +> responseHeaderSize=8192
|  |  |  +> headerCacheSize=4096
|  |  |  +> secureScheme=http
|  |  |  +> securePort=-1
|  |  |  +> idleTimeout=-1
|  |  |  +> blockingTimeout=-1
|  |  |  +> sendDateHeader=true
|  |  |  +> sendServerVersion=false
|  |  |  +> sendXPoweredBy=false
|  |  |  +> delayDispatchUntilContent=true
|  |  |  +> persistentConnectionsEnabled=true
|  |  |  +> maxErrorDispatches=10
|  |  |  +> minRequestDataRate=0
|  |  |  +> minResponseDataRate=0
|  |  |  +> cookieCompliance=RFC6265
|  |  |  +> setRequestCookieCompliance=RFC6265
|  |  |  +> notifyRemoteAsyncErrors=true
|  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  += SelectorManager@ServerConnector@31dc339b{HTTP/1.1,[http/1.1]}{0.0.0.0:23668} - STARTED
|  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  += ManagedSelector@296eda6c{STARTED} id=0 keys=0 selected=0 updates=0 - STARTED
|  |  |  += EatWhatYouKill@3e024667/SelectorProducer@4f4a0e4/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.267-07:00 - STARTED
|  |  |  |  +- SelectorProducer@4f4a0e4
|  |  |  |  +~ QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}] - STARTED
|  |  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +> updates @ 2019-04-23T16:29:09.264-07:00 size=0
|  |  |  +> keys @ 2019-04-23T16:29:09.265-07:00 size=0
|  |  += ManagedSelector@6f9de548{STARTED} id=1 keys=1 selected=0 updates=0 - STARTED
|  |  |  += EatWhatYouKill@2a8ff32/SelectorProducer@5a8f777/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.268-07:00 - STARTED
|  |  |  |  +- SelectorProducer@5a8f777
|  |  |  |  +~ QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}] - STARTED
|  |  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +> updates @ 2019-04-23T16:29:09.268-07:00 size=0
|  |  |  +> keys @ 2019-04-23T16:29:09.268-07:00 size=1
|  |  |     +> SelectionKey@5a74e860{i=0}->null
|  |  += ManagedSelector@6c2dec7f{STARTED} id=2 keys=0 selected=0 updates=0 - STARTED
|  |  |  += EatWhatYouKill@4a6198a7/SelectorProducer@2e06a7e4/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.269-07:00 - STARTED
|  |  |  |  +- SelectorProducer@2e06a7e4
|  |  |  |  +~ QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}] - STARTED
|  |  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +> updates @ 2019-04-23T16:29:09.269-07:00 size=0
|  |  |  +> keys @ 2019-04-23T16:29:09.269-07:00 size=0
|  |  += ManagedSelector@1c3577ec{STARTED} id=3 keys=0 selected=0 updates=0 - STARTED
|  |  |  += EatWhatYouKill@6b928c79/SelectorProducer@29c3dd33/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.269-07:00 - STARTED
|  |  |  |  +- SelectorProducer@29c3dd33
|  |  |  |  +~ QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}] - STARTED
|  |  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +> updates @ 2019-04-23T16:29:09.269-07:00 size=0
|  |  |  +> keys @ 2019-04-23T16:29:09.269-07:00 size=0
|  |  += ManagedSelector@53a15397{STARTED} id=4 keys=0 selected=0 updates=0 - STARTED
|  |  |  += EatWhatYouKill@6fa1dc0c/SelectorProducer@74037f9b/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.27-07:00 - STARTED
|  |  |  |  +- SelectorProducer@74037f9b
|  |  |  |  +~ QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}] - STARTED
|  |  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +> updates @ 2019-04-23T16:29:09.269-07:00 size=0
|  |  |  +> keys @ 2019-04-23T16:29:09.269-07:00 size=0
|  |  += ManagedSelector@53964b0e{STARTED} id=5 keys=0 selected=0 updates=0 - STARTED
|  |  |  += EatWhatYouKill@b0dc426/SelectorProducer@3c10ba31/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.27-07:00 - STARTED
|  |  |  |  +- SelectorProducer@3c10ba31
|  |  |  |  +~ QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}] - STARTED
|  |  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +> updates @ 2019-04-23T16:29:09.27-07:00 size=0
|  |  |  +> keys @ 2019-04-23T16:29:09.27-07:00 size=0
|  |  += ManagedSelector@50db0b17{STARTED} id=6 keys=0 selected=0 updates=0 - STARTED
|  |  |  += EatWhatYouKill@5db05003/SelectorProducer@32200389/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.27-07:00 - STARTED
|  |  |  |  +- SelectorProducer@32200389
|  |  |  |  +~ QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}] - STARTED
|  |  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |  |  +> updates @ 2019-04-23T16:29:09.27-07:00 size=0
|  |  |  +> keys @ 2019-04-23T16:29:09.27-07:00 size=0
|  |  += ManagedSelector@67668cf7{STARTED} id=7 keys=0 selected=0 updates=0 - STARTED
|  |     += EatWhatYouKill@57e9dd1f/SelectorProducer@6e1134e1/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.27-07:00 - STARTED
|  |     |  +- SelectorProducer@6e1134e1
|  |     |  +~ QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}] - STARTED
|  |     |  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |     +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  |     +> updates @ 2019-04-23T16:29:09.27-07:00 size=0
|  |     +> keys @ 2019-04-23T16:29:09.27-07:00 size=0
|  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  +- sun.nio.ch.ServerSocketChannelImpl[/0:0:0:0:0:0:0:0:23668]
|  +- qtp1433867275-31-acceptor-0@1a6804d1-ServerConnector@31dc339b{HTTP/1.1,[http/1.1]}{0.0.0.0:23668}
|  +- qtp1433867275-32-acceptor-1@3e0fe68f-ServerConnector@31dc339b{HTTP/1.1,[http/1.1]}{0.0.0.0:23668}
+= LowResourceMonitor@21213b92{STARTED} - STARTED
|  +- Check if the ThreadPool from monitored connectors are lowOnThreads and if all connections number is higher than the allowed maxConnection
|  +- All connections number is higher than the allowed maxConnection
|  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
+- org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  +> java.util.concurrent.ConcurrentHashMap$EntrySetView@2a1661fb(size=59)
|     +: {}=java.util.concurrent:type=concurrenthashmap,id=0
|     +: qtp1433867275-32-acceptor-1@3e0fe68f-ServerConnector@31dc339b{HTTP/1.1,[http/1.1]}{0.0.0.0:23668}=org.eclipse.jetty.server:context=HTTP/1.1@31dc339b,type=abstractconnector$acceptor,id=1
|     +: ManagedSelector@296eda6c{STARTED} id=0 keys=0 selected=0 updates=0=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector,id=0
|     +: ReservedThreadExecutor@7791a895{s=0/16,p=0}=org.eclipse.jetty.util.thread:type=reservedthreadexecutor,id=0
|     +: SelectorManager@ServerConnector@31dc339b{HTTP/1.1,[http/1.1]}{0.0.0.0:23668}=org.eclipse.jetty.server:context=HTTP/1.1@31dc339b,type=serverconnector$serverconnectormanager,id=0
|     +: DeploymentManager@6e1ec318{STARTED}=org.eclipse.jetty.deploy:type=deploymentmanager,id=0
|     +: org.eclipse.jetty.server.session.SessionHandler2000584752==dftMaxIdleSec=1800=org.eclipse.jetty.server.session:context=glass,type=sessionhandler,id=0
|     +: o.e.j.w.WebAppContext@13c27452{Glass,/glass,file:///usr/sumo/glass-20.1-13709/webapps/glass/,AVAILABLE}{/glass}=org.eclipse.jetty.webapp:context=glass,type=webappcontext,id=0
|     +: SelectorProducer@4f4a0e4=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector$selectorproducer,id=0
|     +: ManagedSelector@67668cf7{STARTED} id=7 keys=0 selected=0 updates=0=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector,id=7
|     +: ManagedSelector@53964b0e{STARTED} id=5 keys=0 selected=0 updates=0=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector,id=5
|     +: SelectorProducer@74037f9b=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector$selectorproducer,id=4
|     +: EatWhatYouKill@2a8ff32/SelectorProducer@5a8f777/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.271-07:00=org.eclipse.jetty.util.thread.strategy:context=HTTP/1.1@31dc339b,type=eatwhatyoukill,id=1
|     +: GzipHandler@73846619{STARTED}=org.eclipse.jetty.server.handler.gzip:type=gziphandler,id=0
|     +: ConstraintSecurityHandler@6b861a26{STARTED}=org.eclipse.jetty.security:context=glass,type=constraintsecurityhandler,id=0
|     +: SelectorProducer@3c10ba31=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector$selectorproducer,id=5
|     +: ErrorHandler@71423665{STARTED}=org.eclipse.jetty.server.handler:type=errorhandler,id=0
|     +: SelectorProducer@32200389=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector$selectorproducer,id=6
|     +: Check if the ThreadPool from monitored connectors are lowOnThreads and if all connections number is higher than the allowed maxConnection=org.eclipse.jetty.server:type=lowresourcemonitor$connectorsthreadpoollowresourcecheck,id=0
|     +: EatWhatYouKill@b0dc426/SelectorProducer@3c10ba31/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.272-07:00=org.eclipse.jetty.util.thread.strategy:context=HTTP/1.1@31dc339b,type=eatwhatyoukill,id=5
|     +: EatWhatYouKill@6fa1dc0c/SelectorProducer@74037f9b/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.272-07:00=org.eclipse.jetty.util.thread.strategy:context=HTTP/1.1@31dc339b,type=eatwhatyoukill,id=4
|     +: org.eclipse.jetty.util.log.Log@4e1d422d=org.eclipse.jetty.util.log:type=log,id=0
|     +: sun.nio.ch.ServerSocketChannelImpl[/0:0:0:0:0:0:0:0:23668]=sun.nio.ch:context=HTTP/1.1@31dc339b,type=serversocketchannelimpl,id=0
|     +: LowResourceMonitor@21213b92{STARTED}=org.eclipse.jetty.server:type=lowresourcemonitor,id=0
|     +: EatWhatYouKill@5db05003/SelectorProducer@32200389/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.272-07:00=org.eclipse.jetty.util.thread.strategy:context=HTTP/1.1@31dc339b,type=eatwhatyoukill,id=6
|     +: ErrorPageErrorHandler@73a65712{STARTED}=org.eclipse.jetty.servlet:context=glass,type=errorpageerrorhandler,id=0
|     +: ManagedSelector@53a15397{STARTED} id=4 keys=0 selected=0 updates=0=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector,id=4
|     +: HttpConnectionFactory@6ea12c19[HTTP/1.1]=org.eclipse.jetty.server:context=HTTP/1.1@31dc339b,type=httpconnectionfactory,id=0
|     +: DefaultHandler@64bfbc86{STARTED}=org.eclipse.jetty.server.handler:type=defaulthandler,id=0
|     +: qtp1433867275-31-acceptor-0@1a6804d1-ServerConnector@31dc339b{HTTP/1.1,[http/1.1]}{0.0.0.0:23668}=org.eclipse.jetty.server:context=HTTP/1.1@31dc339b,type=abstractconnector$acceptor,id=0
|     +: Server@f6c48ac{STARTED}[9.4.16.v20190411]=org.eclipse.jetty.server:type=server,id=0
|     +: org.eclipse.jetty.security.DefaultAuthenticatorFactory@44e31522=org.eclipse.jetty.security:context=glass,type=defaultauthenticatorfactory,id=0
|     +: org.eclipse.jetty.server.session.NullSessionDataStore@5eab49e8[passivating=false,graceSec=3600]=org.eclipse.jetty.server.session:context=glass,type=nullsessiondatastore,id=0
|     +: HouseKeeper@507d20bb{STARTED}[interval=600000, ownscheduler=false]=org.eclipse.jetty.server.session:type=housekeeper,id=0
|     +: org.eclipse.jetty.jmx.MBeanContainer@55d56113=org.eclipse.jetty.jmx:type=mbeancontainer,id=0
|     +: EatWhatYouKill@4a6198a7/SelectorProducer@2e06a7e4/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.272-07:00=org.eclipse.jetty.util.thread.strategy:context=HTTP/1.1@31dc339b,type=eatwhatyoukill,id=2
|     +: ServerConnector@31dc339b{HTTP/1.1,[http/1.1]}{0.0.0.0:23668}=org.eclipse.jetty.server:context=HTTP/1.1@31dc339b,type=serverconnector,id=0
|     +: ContextHandlerCollection@6adede5{STARTED}=org.eclipse.jetty.server.handler:type=contexthandlercollection,id=0
|     +: WebAppProvider@23e028a9{STARTED}=org.eclipse.jetty.deploy.providers:type=webappprovider,id=0
|     +: org.eclipse.jetty.io.ArrayByteBufferPool@3631667d=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=arraybytebufferpool,id=0
|     +: ManagedSelector@50db0b17{STARTED} id=6 keys=0 selected=0 updates=0=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector,id=6
|     +: HandlerCollection@5649fd9b{STARTED}=org.eclipse.jetty.server.handler:type=handlercollection,id=0
|     +: ManagedSelector@6c2dec7f{STARTED} id=2 keys=0 selected=0 updates=0=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector,id=2
|     +: DefaultSessionIdManager@3f29e26{STARTED}[worker=node0]=org.eclipse.jetty.server.session:type=defaultsessionidmanager,id=0
|     +: ManagedSelector@6f9de548{STARTED} id=1 keys=1 selected=0 updates=0=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector,id=1
|     +: ManagedSelector@1c3577ec{STARTED} id=3 keys=0 selected=0 updates=0=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector,id=3
|     +: HttpConfiguration@7481775c{32768/8192,16384/8192,http://:-1,[]}=org.eclipse.jetty.server:context=HTTP/1.1@31dc339b,type=httpconfiguration,id=0
|     +: SelectorProducer@5a8f777=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector$selectorproducer,id=1
|     +: SelectorProducer@2e06a7e4=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector$selectorproducer,id=2
|     +: EatWhatYouKill@3e024667/SelectorProducer@4f4a0e4/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.273-07:00=org.eclipse.jetty.util.thread.strategy:context=HTTP/1.1@31dc339b,type=eatwhatyoukill,id=0
|     +: EatWhatYouKill@6b928c79/SelectorProducer@29c3dd33/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.273-07:00=org.eclipse.jetty.util.thread.strategy:context=HTTP/1.1@31dc339b,type=eatwhatyoukill,id=3
|     +: ScheduledExecutorScheduler@52d455b8{STARTED}=org.eclipse.jetty.util.thread:type=scheduledexecutorscheduler,id=0
|     +: org.eclipse.jetty.server.session.DefaultSessionCache@36bf28d2[evict=-1,removeUnloadable=false,saveOnCreate=false,saveOnInactiveEvict=false]=org.eclipse.jetty.server.session:context=glass,type=defaultsessioncache,id=0
|     +: All connections number is higher than the allowed maxConnection=org.eclipse.jetty.server:type=lowresourcemonitor$maxconnectionslowresourcecheck,id=0
|     +: SelectorProducer@29c3dd33=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector$selectorproducer,id=3
|     +: SelectorProducer@6e1134e1=org.eclipse.jetty.io:context=HTTP/1.1@31dc339b,type=managedselector$selectorproducer,id=7
|     +: ServletHandler@a0b8f79{STARTED}=org.eclipse.jetty.servlet:context=glass,type=servlethandler,id=0
|     +: EatWhatYouKill@57e9dd1f/SelectorProducer@6e1134e1/PRODUCING/p=false/QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}][pc=0,pic=0,pec=0,epc=0]@2019-04-23T16:29:09.273-07:00=org.eclipse.jetty.util.thread.strategy:context=HTTP/1.1@31dc339b,type=eatwhatyoukill,id=7
|     +: QueuedThreadPool[qtp1433867275]@5577140b{STARTED,10<=10<=250,i=0,q=1}[ReservedThreadExecutor@7791a895{s=0/16,p=0}]=org.eclipse.jetty.util.thread:type=queuedthreadpool,id=0
+- org.eclipse.jetty.util.log.Log@4e1d422d
+= ErrorHandler@71423665{STARTED} - STARTED
|  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
+- java.util.concurrent.ConcurrentHashMap@0{size=0}
+- java.util.concurrent.ConcurrentHashMap@0{size=0}
+- java.util.concurrent.ConcurrentHashMap@0{size=0}
+= DefaultSessionIdManager@3f29e26{STARTED}[worker=node0] - STARTED
|  +~ org.eclipse.jetty.jmx.MBeanContainer@55d56113
|  += HouseKeeper@507d20bb{STARTED}[interval=600000, ownscheduler=false] - STARTED
+> startJarLoader@28c97a5
|  +> URLs size=16
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/javax.servlet-api-3.1.0.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-continuation-9.4.16.v20190411.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-deploy-9.4.16.v20190411.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-http-9.4.16.v20190411.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-io-9.4.16.v20190411.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-jmx-9.4.16.v20190411.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-security-9.4.16.v20190411.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-server-9.4.16.v20190411.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-servlet-9.4.16.v20190411.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-servlets-9.4.16.v20190411.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-start.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-start-9.4.16.v20190411-shaded.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-util-9.4.16.v20190411.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-webapp-9.4.16.v20190411.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-xml-9.4.16.v20190411.jar
|  |  +> file:/usr/sumo/glass-20.1-13709/lib/slf4j-api-1.8.0-beta2.jar
|  +> sun.misc.Launcher$AppClassLoader@18b4aac2
|     +> URLs size=2
|     |  +> file:/usr/sumo/glass-20.1-13709/lib/jetty-start.jar
|     |  +> file:/usr/sumo/jmxetric-1.0.7-alpha7-refresh/jmxetric-1.0.7-alpha7-refresh.jar
|     +> sun.misc.Launcher$ExtClassLoader@2415fc55
|        +> URLs size=10
|           +> file:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/localedata.jar
|           +> file:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/jaccess.jar
|           +> file:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/cldrdata.jar
|           +> file:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/sunec.jar
|           +> file:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/sunjce_provider.jar
|           +> file:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/zipfs.jar
|           +> file:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/dnsns.jar
|           +> file:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/icedtea-sound.jar
|           +> file:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/sunpkcs11.jar
|           +> file:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/nashorn.jar
+> AttributesMap@3391024
   +> java.util.concurrent.ConcurrentHashMap@5e521398{size=3}
      +@ org.eclipse.jetty.resources.cache=java.util.concurrent.ConcurrentHashMap@0{size=0}
      +@ org.eclipse.jetty.webFragments.cache=java.util.concurrent.ConcurrentHashMap@0{size=0}
      +@ org.eclipse.jetty.tlds.cache=java.util.concurrent.ConcurrentHashMap@0{size=0}
key: +- bean, += managed, +~ unmanaged, +? auto, +: iterable, +] array, +@ map, +> undefined

@joakime (Contributor) commented Apr 24, 2019

> I've experienced something similar starting today as well: a 9.4.14 -> 9.4.16 upgrade, Java 8, Ubuntu 18.04. I'll try to get a Server.dump() shortly; I'm working on proving my theory below and will take the dump if the theory proves true. :)

> As far as I can tell, the lockup (in my case) is caused when ReservedThreadExecutor.capacity > QueuedThreadPool.minThreads. When my service runs with capacity=8 and minThreads=10, it works correctly. But when I set capacity=16, I notice that every thread mentioning ReservedThread.run disappears from thread dumps within a short time. (We choose reserved capacity using the heuristic, in case that's relevant.)

Nice work, we'll see if we can replicate on our end!
So far, various load testing setups we use have not triggered this issue.

> After doing some diving into the code changes, I believe it is one of the changes introduced by one of these commits (which are new in 9.4.16): a982422 1097354 5177abb 9cda1ce

We are also looking closer at those same commits.
They've undergone multiple reviews, from different developers, without issue.
But the one that has a few of us going "hmmm" is 1097354.

> In particular, in 9.4.14 (and 9.4.15) we only shrink the QTP relative to maxThreads, not relative to minThreads. This was changed in 1097354#diff-2572dae5bd06448110dfdfe5038ca01dL785. My gut tells me that minThreads is actually closer to the right logic, but I think it's producing incorrect behavior by removing reserved threads. Maybe it should be < minThreads + reservedThreads? (In my setup, maxThreads is 250, so it's unlikely the threads were ever stopped before.)

Interesting observation, we'll dig into this with some fervor.

@cddude229 commented Apr 24, 2019

OK, a bit more info: my first "add more" attempt was actually NOT made by changing the jetty.xml config, but the same way we originally discovered it: differently sized clusters. When I switched from AWS's c3.2xlarge to c3.4xlarge, it broke. However, adjusting -1 to "16" in the config below does NOT break it, although it does still appear to increase ReservedThreadExecutor.capacity to 16, so the change worked.

Config:

      <Set name="minThreads" type="int">
        <Property name="jetty.threadPool.minThreads" deprecated="threads.min" default="10"/>
      </Set>
      <Set name="maxThreads" type="int">
        <Property name="jetty.threadPool.maxThreads" deprecated="threads.max" default="250"/>
      </Set>
      <Set name="reservedThreads" type="int">
        <Property name="jetty.threadPool.reservedThreads" default="16"/>
      </Set>
      <Set name="idleTimeout" type="int">
        <Property name="jetty.threadPool.idleTimeout" deprecated="threads.timeout" default="30000"/>
      </Set>
      <Set name="detailedDump" type="boolean">
        <Property name="jetty.threadPool.detailedDump" default="false"/>
      </Set>
      <Set name="stopTimeout" type="long">30000</Set>
      <Set name="daemon" type="boolean">true</Set>

There's definitely continuous variation in the QTP idle threads, compared to before the change:
[screenshot: QTP idle thread count varying over time]

So that tells me they're coming and going (29:50 is about when I switched from -1 to 16). We had to kill JUL logging because it doesn't play nice with log4j2 in our setup, so I can't get anything exact there. I'll see if I can temporarily re-enable it.

@cddude229

Additional experiments run + results (all on the "broken" instance type, c3.4xlarge):

  1. Set -1 to 8 (the value the heuristic previously determined) and saw it still break.
  2. Ran with all four commits I mentioned reverted (equivalent to 9.4.15) and saw it work with -1.
  3. Ran with just 1097354 and a982422 reverted; saw it break. This tells me the bug is in 5177abb or 9cda1ce.
  4. Re-reverted 9cda1ce (so only 5177abb is applied) and it worked.

So my conclusion is that my bug (which is similar to the original poster's and may or may not be the same) is attributable to 9cda1ce and not 1097354, so my theory about maxThreads appears wrong.

P.S. If my bug proves different from OP's, apologies for thread hijacking. It just seemed so similar that it felt right to include the info here.

@gregw (Contributor) commented Apr 24, 2019

One thing I have noticed is that our changes have also broken the QueuedThreadPool dump. So while others work on a reproduction of the issue, I've created the branch jetty-9.4.x-3550-QueuedThreadPool-stalled and will fix the dump, so we can get a better idea of what is going on.

gregw added a commit that referenced this issue Apr 24, 2019
Fixed QueuedThreadPool dump of known threads

Signed-off-by: Greg Wilkins <gregw@webtide.com>
lachlan-roberts added a commit to lachlan-roberts/jetty.project that referenced this issue Apr 24, 2019
…hreads

previously if there were no idle threads, QueuedThreadPool.execute()
would just queue the job and not start a new thread to run it

Signed-off-by: lachan-roberts <lachlan@webtide.com>
gregw pushed a commit that referenced this issue Apr 24, 2019
previously if there were no idle threads, QueuedThreadPool.execute()
would just queue the job and not start a new thread to run it

Signed-off-by: lachan-roberts <lachlan@webtide.com>
gregw added a commit that referenced this issue Apr 24, 2019
Ensure that new threads are started if a thread exits due to failure.

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw (Contributor) commented Apr 24, 2019

We believe that we have found the reason for the failure.

The QTP as modified in 9.4.16 allows the number of idle threads to shrink to 0 as long as that does not violate the minThreads requirement. When a new job is executed (or an existing thread fails), we now need to check whether a new thread should be started, as there may not be an idle thread looking for queued tasks.

Can you please look at the branch referenced in PR #3586 and test whether it fixes your problem? If so, we will get a new release out ASAP!
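
A hedged sketch of the kind of check described above (illustrative only, not the actual patch in PR #3586; `jobs` and `startThread()` stand in for the pool's internal queue and thread-starting logic):

```java
// Inside a QueuedThreadPool-like executor: after queueing a job, make sure
// some thread will actually pick it up, since idle threads may have shrunk to 0.
public void execute(Runnable job)
{
    if (!jobs.offer(job))
        throw new java.util.concurrent.RejectedExecutionException(job.toString());

    // If nothing is idle and we are below maxThreads, start a new thread;
    // otherwise the queued job may sit unserved until load creates one.
    if (getIdleThreads() == 0 && getThreads() < getMaxThreads())
        startThread(); // hypothetical helper
}
```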

@gregw gregw self-assigned this Apr 24, 2019
@gregw gregw added Bug For general bugs on Jetty side High Priority labels Apr 24, 2019
gregw added a commit that referenced this issue May 8, 2019
Signed-off-by: Greg Wilkins <gregw@webtide.com>
gregw added a commit that referenced this issue May 17, 2019
I have modified the JMH benchmark to look at latency (by measuring throughput) for executing a job under low, medium, and high concurrency. It looks like the JRE's thread pool is not so bad in some load ranges, but once things get busy we are still a bit better. No significant difference is seen between the previous QTP implementation and the one in this PR.

```

Benchmark                     (size)  (type)   Mode  Cnt       Score        Error  Units
ThreadPoolBenchmark.testFew      200     ETP  thrpt    3  129113.271 ±  10821.235  ops/s
ThreadPoolBenchmark.testFew      200     QTP  thrpt    3  122970.794 ±   8702.327  ops/s
ThreadPoolBenchmark.testFew      200     QTP+ thrpt    3  121408.662 ±  12420.318  ops/s

ThreadPoolBenchmark.testSome     200     ETP  thrpt    3  277400.574 ±  34433.710  ops/s
ThreadPoolBenchmark.testSome     200     QTP  thrpt    3  244056.673 ±  60118.319  ops/s
ThreadPoolBenchmark.testSome     200     QTP+ thrpt    3  240686.913 ±  43104.941  ops/s

ThreadPoolBenchmark.testMany     200     ETP  thrpt    3  679336.308 ± 157389.044  ops/s
ThreadPoolBenchmark.testMany     200     QTP  thrpt    3  704502.083 ±  15624.325  ops/s
ThreadPoolBenchmark.testMany     200     QTP+ thrpt    3  708220.737 ±   3254.264  ops/s

```

Signed-off-by: Greg Wilkins <gregw@webtide.com>
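
For context, a sketch of a JMH harness along the lines described in the commit message above (the class name, pool sizes, and thread counts are illustrative, not the actual ThreadPoolBenchmark):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
public class ThreadPoolBenchmarkSketch
{
    @Param({"QTP", "ETP"})
    public String type;

    Executor pool;

    @Setup
    public void setUp() throws Exception
    {
        if ("QTP".equals(type))
        {
            QueuedThreadPool qtp = new QueuedThreadPool(200);
            qtp.start(); // QueuedThreadPool is a LifeCycle and must be started
            pool = qtp;
        }
        else
        {
            pool = Executors.newFixedThreadPool(200); // the JRE's thread pool
        }
    }

    @TearDown
    public void tearDown() throws Exception
    {
        if (pool instanceof QueuedThreadPool)
            ((QueuedThreadPool)pool).stop();
        else
            ((ExecutorService)pool).shutdown();
    }

    // Measures throughput of submitting one job and waiting for it to run;
    // vary @Threads to cover the few/some/many concurrency levels above.
    @Benchmark
    @Threads(8)
    public void testSome() throws InterruptedException
    {
        CountDownLatch done = new CountDownLatch(1);
        pool.execute(done::countDown);
        done.await();
    }
}
```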
sbordet added a commit that referenced this issue May 22, 2019
Improved code formatting.
Removed unnecessary code in doStop().
Now explicitly checking whether idleTimeout > 0 before shrinking.

Signed-off-by: Simone Bordet <simone.bordet@gmail.com>
@gregw (Contributor) commented May 24, 2019

A problem has been found in the #3550 cleanup, where a race between the idle-thread count and the job-queue size results in a thread not being started when one is needed.

@gregw gregw reopened this May 24, 2019
gregw added a commit that referenced this issue May 27, 2019
minor cleanups

Signed-off-by: Greg Wilkins <gregw@webtide.com>
gregw added a commit that referenced this issue May 27, 2019
don't exit Runner on exceptions

Signed-off-by: Greg Wilkins <gregw@webtide.com>
gregw added a commit that referenced this issue May 27, 2019
cleanup after pair programming with sbordet

Signed-off-by: Greg Wilkins <gregw@webtide.com>
gregw added a commit that referenced this issue May 27, 2019
cleanups and more comments

Signed-off-by: Greg Wilkins <gregw@webtide.com>
gregw added a commit that referenced this issue May 27, 2019
removed flaky test

Signed-off-by: Greg Wilkins <gregw@webtide.com>
gregw added a commit that referenced this issue May 27, 2019
longer benchmark runs

Signed-off-by: Greg Wilkins <gregw@webtide.com>
gregw added a commit that referenced this issue May 28, 2019
optimized by removing need to check running

Signed-off-by: Greg Wilkins <gregw@webtide.com>
gregw added a commit that referenced this issue May 28, 2019
minor cleanups

Signed-off-by: Greg Wilkins <gregw@webtide.com>
gregw added a commit that referenced this issue May 29, 2019
updates from review

Signed-off-by: Greg Wilkins <gregw@webtide.com>
gregw added a commit that referenced this issue May 29, 2019
updates from review

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@joakime (Contributor) commented Jun 20, 2019

@gregw is this fixed? You reopened this and created PR #3694, which is now merged in commit 0c61ec3 and available in release 9.4.19.v20190610.

@gregw (Contributor) commented Jun 21, 2019

I believe so.

@gregw gregw closed this as completed Jun 21, 2019
dmichel1 added a commit to spotify/heroic that referenced this issue Nov 13, 2019
The version we are on could cause the server to become unresponsive after sitting idle from a load spike

jetty/jetty.project#3550

https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.18.v20190429