What does this mean: Thread starvation or clock leap detected #679
Comments
Nothing to worry about. Looks like you are running on a laptop, and it went to sleep. Nothing to fix configuration-wise.
I also noticed this several times on server systems that definitely didn't go to sleep:
2m36s starvation is significant. If the system did not enter sleep, and you are sure of that, there are only two possibilities that I can see. One, this is a virtual machine that is configured to synchronize its clock from the host, in which case the solution is to run ntpd (Linux) or w32time (Windows) in the VM. Two, there was actually a 2+ minute starvation event, in which case you need to monitor the CPU and figure out what is creating the load.
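For anyone trying to work out which of those two cases applies on their own machine, here is a minimal sketch of the same idea HikariCP's housekeeper relies on. This is not HikariCP's code; the 30-second period (roughly HikariCP's default housekeeping cadence) and the thresholds are assumptions for illustration. It compares wall-clock elapsed time against the monotonic clock on a fixed schedule: a wall-clock jump without a matching monotonic jump means the system clock moved (a clock leap, or on some platforms a suspend, since the monotonic clock may not advance while the machine sleeps), while a large monotonic gap means the task genuinely did not get scheduled for that long.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ClockLeapProbe {
    private static final long PERIOD_MS = 30_000; // assumed period, similar to HikariCP's housekeeper
    private static volatile long lastWallMs = System.currentTimeMillis();
    private static volatile long lastMonoNs = System.nanoTime();

    public static void main(String[] args) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleWithFixedDelay(() -> {
            long nowWallMs = System.currentTimeMillis();
            long nowMonoNs = System.nanoTime();
            long wallDeltaMs = nowWallMs - lastWallMs;
            long monoDeltaMs = TimeUnit.NANOSECONDS.toMillis(nowMonoNs - lastMonoNs);

            if (wallDeltaMs > PERIOD_MS * 2 && monoDeltaMs < PERIOD_MS * 2) {
                // Wall clock jumped but the monotonic clock did not: the system time moved
                // (clock sync, or a suspend on platforms where the monotonic clock pauses).
                System.err.printf("Clock leap suspected: wall=%dms mono=%dms%n", wallDeltaMs, monoDeltaMs);
            } else if (monoDeltaMs > PERIOD_MS * 2) {
                // Both clocks show a large gap: this task simply did not run for that long (starvation).
                System.err.printf("Starvation suspected: wall=%dms mono=%dms%n", wallDeltaMs, monoDeltaMs);
            }
            lastWallMs = nowWallMs;
            lastMonoNs = nowMonoNs;
        }, PERIOD_MS, PERIOD_MS, TimeUnit.MILLISECONDS);
    }
}
```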
Today I saw this on a developer Windows machine (no VM). After 50 minutes of executing that statement, the above-mentioned warning appeared.
I also have the same problem. After running the process for a while I am getting this.
Virtual machine? Laptop?
It's on a laptop. The reason I posted it here is the same as what patric-r said. I am trying to index my relational database with Elasticsearch, and since it takes some time, around the 50-minute mark I start getting this warning and the application services stop responding! Do you think it's a problem with my configuration, Brett? Up to 40-45 minutes the application runs perfectly fine.
The CPU is going into deep sleep mode. Turn off power saving/sleep during the operation.
I also have the same problem. After running the process for a while I am getting payprod-core-hikari-pool - Thread starvation or clock leap detected (housekeeper delta=3h36m56s61ms486µs9ns). As a result I can't get a connection; it throws org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLTransientConnectionException: payprod-core-hikari-pool - Connection is not available, request timed out after 935119ms. How should I solve it?
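A side note on the second half of that error: "Connection is not available, request timed out after 935119ms" is HikariCP's connectionTimeout expiring while nothing in the pool became free, which is consistent with a JVM that was effectively paused for hours (the 3h36m housekeeper delta). The sketch below is purely illustrative; the pool name is taken from the log, but the URL, credentials and sizes are made up, not the poster's real configuration. It only shows which HikariCP settings those numbers come from.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolConfigExample {

    public static HikariDataSource buildPool() {
        HikariConfig config = new HikariConfig();
        config.setPoolName("payprod-core-hikari-pool");
        // Hypothetical connection details, for illustration only.
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/payprod");
        config.setUsername("app");
        config.setPassword("secret");
        config.setMaximumPoolSize(10);
        // "request timed out after 935119ms" is this timeout expiring; the default is 30 seconds,
        // so a sane value surfaces a stuck pool (or a GC-bound JVM) much earlier.
        config.setConnectionTimeout(30_000);
        // Optional: warn when a connection is held longer than 60s, to catch leaks.
        config.setLeakDetectionThreshold(60_000);
        return new HikariDataSource(config);
    }
}
```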
Hi. You must be doing a stateful transaction. This is how I debugged it and how I fixed it. Hope this helps.
3h36m56s61ms?! Your computer must be going into sleep mode. There is no way a thread in a normally running system could be starved for so long. Either that or something is messing with the system clock (unlikely).
I found this issue running Tomcat servers on a virtual Kubernetes cluster. I was getting status 80 connection refused from all the endpoints, and intermittently the services would briefly work. This thread's message was the only thing out of the ordinary I could find in the logs. My theory is that the machine did not have enough resources for the threads, because when I did not start all of the services in the cluster I had no issue. Therefore, it may be a lack-of-resources issue.
I concur with @Coroecram. We just encountered these messages with java.lang.OutOfMemoryError.
@Coroecram what is the magnitude of your reported delta, and did you find any cause or solution? We're seeing the same issue in a similar setup (45s delta).
I also encountered the same warning, but it has no effect on my scheduled task. The only problem is that my scheduled task was retriggered during this time period.
In my case I got this error initially and then a heap overflow. I increased the heap and stack sizes and it went away. But in my case, I was running a class that overflowed the stock heap size.
@milosonator Sorry, this was so long ago that I really don't know the exact conditions.
Having the same issue in an AWS ECS container, so it couldn't be due to sleep. Previously the CPU resources were highly utilized (90-100%). Even after providing enough resources, it gives the same error.
The number one cause of this issue is excessive garbage collection. If you are seeing starvation logs indicating delays of more than a few tens of seconds, it is unlikely to be caused by excessive CPU, though that is still a possibility. I highly recommend that you enable GC (garbage collection) logging. GC logging will show garbage collection times as well as memory statistics such as how much memory was available before and after the GC. When memory gets extremely low in the JVM, the JVM will spend all of its time in GC, pausing application threads as it does so. See this page: https://www.baeldung.com/java-verbose-gc (note: the example JVM arguments on that page use the […]).

If you are running on Java 11 or greater on Linux, then I strongly recommend looking into the new ZGC collector: https://www.baeldung.com/jvm-zgc-garbage-collector (enabled by the -XX:+UseZGC flag). But I recommend doing so only after fully investigating your current GC configuration with the logging referenced above.
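As a lightweight complement to full GC logging (-verbose:gc on older JVMs, -Xlog:gc* with unified logging on Java 9+), the standard management beans can give a rough in-process view of the same thing. The sketch below is only an illustration and reports cumulative collection time rather than individual pause lengths, but if the GC time per interval approaches the interval itself, the JVM is spending nearly all of its time collecting, which is exactly the state that starves HikariCP's housekeeper thread.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class GcTimeMonitor {

    public static void main(String[] args) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(new Runnable() {
            private long lastGcMs;

            @Override
            public void run() {
                long totalGcMs = 0;
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    // getCollectionTime() returns -1 when the value is unavailable.
                    totalGcMs += Math.max(gc.getCollectionTime(), 0);
                }
                System.out.printf("GC time in last interval: %d ms (cumulative: %d ms)%n",
                        totalGcMs - lastGcMs, totalGcMs);
                lastGcMs = totalGcMs;
            }
        }, 10, 10, TimeUnit.SECONDS);
    }
}
```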
Hi,
I am trying out HikariCP pooling. I am using Tomcat on localhost and my config is a direct copy from the docs:
pom.xml
applicationContext.xml
WARN - springHikariCP - Thread starvation or clock leap detected (housekeeper delta=1h35m13s114ms).
What does this mean and how to fix?