Getting random too many CLOSE_WAIT states on heavy load #1473
Where do you get the CLOSE_WAITs, on the server or on the client?
We are getting it on the server. Basically the server isn't responding when there are too many CLOSE_WAITs. By executing the command below we can get the total number of CLOSE_WAITs:
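The exact one-liner isn't shown in the thread; a typical way to count CLOSE_WAIT sockets (an assumed reconstruction) is:

```sh
# Count sockets currently stuck in CLOSE_WAIT (assumed form of the elided command)
netstat -anp | grep CLOSE_WAIT | wc -l
```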
What is your configuration? When the CLOSE_WAITs appear, does their number decrease over time? Can you take a network dump with Wireshark that shows the problem (the client closing, but the server not closing), and attach it here?
@bhaveshmaniya point being we cannot reproduce this. So we need a reproducible case from you, or a network dump that shows the problem, or in general further details on what is going on. Also, are you sure that the CLOSE_WAITs are related to Jetty? Do you have, in your server application, a client that connects to somewhere else that may generate the CLOSE_WAITs?
Yes, we can see it reducing, like 400 to 380, 350, etc. Here I've attached the network dumps taken using Wireshark when the CLOSE_WAITs get to around 800 and the server takes too long to respond (isn't responding).
I am not sure about that. We've also deployed the same application on CentOS 7 (the other software is the same), and we face the same issue there as well.
No.
On what port is your server listening in the network dump?
The server is listening on port …
Had a look at the network dumps. The server also appears to send back either random, gzipped, or encrypted data, so there is no way to figure out whether the framing is ok. You also appear to be using Apache Bench, probably for load testing. A CLOSE_WAIT on the server means the client sent a TCP FIN, but the server application never closed its own side of the connection. I need a dump that clearly shows this happening. I would start by tuning the load testing down to a rate that is very light, and see if I still get the CLOSE_WAITs. If not, raise the rate but keep an eye on the client to detect when it is maxing out (which will happen way before the server does).
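For example, Apache Bench can be kept to a light rate by using a low concurrency (a sketch; the URL and endpoint are hypothetical):

```sh
# Light-load run: 1000 requests with only 2 concurrent connections (hypothetical URL)
ab -n 1000 -c 2 http://your-server:8080/your-endpoint
```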
How sure are you that this is not caused by the JDBC driver to MySQL? Can you actually pinpoint that the CLOSE_WAIT sockets belong to Jetty, by looking at the ports?
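One way to check (a sketch, assuming Jetty listens on port 8080) is to filter the CLOSE_WAIT sockets by Jetty's local port:

```sh
# Show CLOSE_WAIT sockets whose local port is Jetty's listener (8080 is an assumed port);
# -p needs root to show the owning process
ss -tnp state close-wait '( sport = :8080 )'
```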
Hi @sbordet,
Regarding the Apache Bench use: basically we would like to increase the load on the server, as we are facing the CLOSE_WAIT issue when too many simultaneous requests come to the server. I've taken the CLOSE_WAIT counts by executing the command shown earlier.
Also, when the CLOSE_WAITs increase and the server takes too long to respond, we get a 'java.lang.OutOfMemoryError: GC overhead limit exceeded' exception; refer to the attached 'jetty_log.txt' file for the same. Let me know if you need more information.
@bhaveshmaniya we don't see this in our load tests, so it must be something peculiar with your setup. If you run with a very light load, do you see the problem?
@sbordet you might be correct, there must be something peculiar with the setup; I'll check it again and try to get a pcap generated from normal requests rather than from Apache Bench. With a light load we don't see the problem; it works perfectly. Thank you!
No update; closing as invalid.
@bhaveshmaniya are you using Apache's CXF library anywhere? If yes, then you can try changing the log level from ERROR to INFO, which might solve the problem of the CLOSE_WAIT socket count. Worked for me.
Hi guys, I would like this case to be reopened, because this is something reproducible.
@caimite can you open a new issue with the details needed to reproduce it? We do have these occasional reports of this problem, but unless we can reproduce it ourselves we cannot debug it.
@gregw thanks for replying to my post.
I have encountered the same problem. The Jetty version is 9.4.16. It's ok to connect to the Jetty server just after starting it, but connections fail after the server has been running for about twenty minutes. After analyzing the network traffic, we found that the server did not return a "server hello" after the client sent a "client hello". By chance, we found that the server has both an external IP and 127.0.0.1. So we tried removing the listener on 127.0.0.1, and finally found that the problem no longer appeared.
@whuxiari please open a new issue. |
Jetty is embedded; the OS is SUSE 12 SP2. Besides Jetty, we integrated Jersey, HK2, ...
We found a new phenomenon: the problem appears only if the machine has multiple IPs and we are listening on two or more of them.
This issue has been automatically marked as stale because it has been a full year without activity. It will be closed if no further activity occurs. Thank you for your contributions. |
We have developed a Jersey web application (REST APIs) running on Jetty 9.
Basically we are randomly getting a too-many-CLOSE_WAITs issue. We've tried to figure out a solution and have gone through a number of references.
As suggested in those references, we've updated the /etc/sysctl.conf, /opt/jetty9/etc/jetty.xml and /etc/security/limits.conf files with the details below:
/etc/sysctl.conf
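The exact values aren't shown here; TCP tuning of this shape is what such references typically suggest (all values below are assumptions):

```
# Assumed TCP tuning; the original values from the issue are not preserved
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_max_syn_backlog = 4096
net.core.somaxconn = 4096
```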
Then we execute the command that applies these settings.
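Presumably (an assumption, since the exact command isn't shown) this is the standard reload of /etc/sysctl.conf:

```sh
# Apply the settings from /etc/sysctl.conf without rebooting
sysctl -p
```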
/opt/jetty9/etc/jetty.xml
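The actual changes aren't preserved; a sketch of the kind of connector tuning such references suggest for Jetty 9, with assumed values, looks like this:

```xml
<!-- Sketch of a Jetty 9 ServerConnector with assumed tuning values -->
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Call name="addConnector">
    <Arg>
      <New class="org.eclipse.jetty.server.ServerConnector">
        <Arg name="server"><Ref refid="Server"/></Arg>
        <!-- Binding "host" to one address also controls which interface Jetty listens on -->
        <Set name="host"><Property name="jetty.host"/></Set>
        <Set name="port"><Property name="jetty.port" default="8080"/></Set>
        <Set name="idleTimeout">30000</Set>
        <Set name="acceptQueueSize">4096</Set>
      </New>
    </Arg>
  </Call>
</Configure>
```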
/etc/security/limits.conf
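Again the original values are lost; raised file-descriptor limits of this shape are what such guides usually recommend (assumed values):

```
# Assumed file-descriptor limits; the original values are not preserved
*    soft    nofile    65536
*    hard    nofile    65536
```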
We've also gone through code optimization steps, still with no luck. I have gone through the fundamentals of TCP connection states and the causes of the CLOSE_WAIT state, tried the approaches above, and also gone through many CLOSE_WAIT-related questions on Stack Overflow and tried the solutions people mentioned there, but didn't get any success.
Has anyone faced the same issue and found a solution?