Timeout waiting for connection from pool #269
Hi @dehora,
It seems possible that the number of incoming requests exceeded the number of requests that could be completed during the period of high server-side latency, leading to connection pool exhaustion. A thread dump during the problematic time period could probably help confirm this. You may also consider lowering the timeout configuration so that the application can fail fast in the face of transient high latency.
@hansonchar The timeouts were the defaults for 1.8.3. There was nothing unusual in terms of request load at the time (we didn't seem to trigger any limits on DDB). FWIW, I've had a look at the current master and can't see any obvious places where connections don't get closed.
"You may also consider changing the timeout configuration to some lower values so that the application could fail-fast in face of transient high latency."
Thanks for the timeout pointers, and yep, we've wrapped the SDK calls with Hystrix to introduce circuit breaking.
Hi @dehora, as you probably already know, the connection timeout and socket read timeout can be configured at the client level via ClientConfiguration: http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html
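For reference, a minimal sketch of what that client-level configuration can look like with the 1.x SDK; the timeout and pool-size values below are illustrative, not recommendations:

    import com.amazonaws.ClientConfiguration;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;

    public class TimeoutConfigExample {
        public static AmazonDynamoDBClient buildClient() {
            // Values here are placeholders; tune them to your own latency tolerance.
            ClientConfiguration config = new ClientConfiguration()
                    .withConnectionTimeout(2000) // ms to establish the TCP connection
                    .withSocketTimeout(5000)     // ms to wait for data on an established connection
                    .withMaxConnections(50);     // size of the underlying Apache connection pool
            return new AmazonDynamoDBClient(config);
        }
    }

Lower timeouts mean a stuck request gives its pooled connection back sooner, which is what lets the application fail fast instead of exhausting the pool.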
This is happening to us as well. The symptom is the exception from Apache that there are no connections in the pool. Since the AWS SDK encapsulates all connection management, this seems like an AWS SDK for Java bug. Can someone comment on the plan for fixing it?
Hi @ferozed, can you provide more specific details, such as the version of the AWS SDK for Java you are using, stack trace, client configuration, etc.?
Here is the stack trace:
Here is the client configuration:
We use SDK version 1.4.1.
@ferozed Are you perhaps using the streaming API to read S3 objects but forgetting to close the S3Object after you consume the content?
Yep, that is the reason. Thanks for pointing it out.
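For anyone hitting this later, a minimal sketch of reading an object while guaranteeing the connection is released, assuming a 1.x SDK version where S3Object implements Closeable (as noted further down in the thread); the bucket, key, and line-by-line read are placeholders:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.S3Object;

    public class ReadObjectExample {
        // try-with-resources closes the S3Object (and its stream) on success
        // and on error, so the pooled HTTP connection is always returned.
        public static String readObject(AmazonS3Client s3, String bucket, String key) throws IOException {
            try (S3Object object = s3.getObject(bucket, key);
                 BufferedReader reader = new BufferedReader(
                         new InputStreamReader(object.getObjectContent()))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line).append('\n');
                }
                return body.toString();
            }
        }
    }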
Hi! We fixed this bug on our side, but it is happening again. Our Java process makes calls to SQS and S3 at the same time. Is it possible that both AmazonSQSClient and AmazonS3Client share the same underlying HTTP client, and that it is not the S3 client but the SQS client that is leaking? Yesterday we had this issue surface again. I have grepped the logs for all S3 client errors and put them into a logfile. Please have a look and let me know if you see anything that stands out. Here is a snippet of my code:
Looking at the docs, I see that S3Object also implements the Closeable interface. Does this object also need to be closed on error? Could that be contributing to the connection leak?
Hope this helps. |
getDestinationPath() just concatenates strings. There is a very low chance of that failing.
What if the first statement of your writeS3ObjectToFile() function, i.e. FileOutputStream outputStream = new FileOutputStream(outputFile);, fails? The s3Object should be closed in the saveOriginalImage() function.
@agargi is correct - if you ever have a reference to an S3Object, you MUST close it, otherwise you're potentially leaking an HTTP connection. Also, if you're just writing the object contents to disk, then you can use TransferManager, or one of the other versions of getObject in AmazonS3Client that take a File and perform the file write for you. That would prevent you from ever having to manage/close the S3Object streams.
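A rough sketch of both alternatives mentioned above, against the 1.x SDK; method and variable names are illustrative:

    import java.io.File;

    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.GetObjectRequest;
    import com.amazonaws.services.s3.transfer.Download;
    import com.amazonaws.services.s3.transfer.TransferManager;

    public class DownloadToFileExample {
        // Option 1: let the client write straight to disk; no S3Object stream to manage.
        public static void downloadWithClient(AmazonS3Client s3, String bucket, String key, File target) {
            s3.getObject(new GetObjectRequest(bucket, key), target);
        }

        // Option 2: TransferManager manages the stream (and large downloads) for you.
        public static void downloadWithTransferManager(AmazonS3Client s3, String bucket, String key, File target)
                throws InterruptedException {
            TransferManager tm = new TransferManager(s3);
            try {
                Download download = tm.download(bucket, key, target);
                download.waitForCompletion();
            } finally {
                tm.shutdownNow(false); // false = keep the shared S3 client alive for other callers
            }
        }
    }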
This is happening to me, and I am only using the
@ilaipi I'm not sure I understand the question. Can you elaborate?
@shorea
In my project I make requests to the WeChat service. My code is:
without closing the connection. After my project had been online for a few days, almost a week, I got the error:
I have edited my code and published it:
I will wait a few days to see the effect. Thanks.
Okay, gotcha. Yeah, closing the connection applies when using the raw Apache client as well.
How do I release the httpClient? I am using HttpClient 4.5.2. Here is my code below:
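Not specific to the AWS SDK, but as a general sketch for HttpClient 4.5.x: the client itself can stay open and be reused, while each response should be consumed and closed so its connection goes back to the pool. The URL and method names are placeholders:

    import java.io.IOException;

    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.util.EntityUtils;

    public class HttpClientReleaseExample {
        public static String fetch(String url) throws IOException {
            // Closing the response (and fully consuming the entity) returns the
            // connection to the pool; closing the client shuts the pool down.
            try (CloseableHttpClient httpClient = HttpClients.createDefault();
                 CloseableHttpResponse response = httpClient.execute(new HttpGet(url))) {
                return EntityUtils.toString(response.getEntity()); // fully consumes the entity
            }
        }
    }

In a long-lived service you would typically keep one CloseableHttpClient for the life of the process and only close the per-request responses.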
FYI: I found that I was able to work around this error by using the AmazonS3Client.getObjectAsString method, which has a finally block with a close in it, as opposed to AmazonS3Client.getObject, which doesn't seem to have the same closing behavior.
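For completeness, a sketch of that workaround; note that getObjectAsString buffers the whole body in memory and closes the stream internally, so it is only a good fit for reasonably small objects:

    import com.amazonaws.services.s3.AmazonS3Client;

    public class GetObjectAsStringExample {
        // No S3Object is exposed to the caller, so there is nothing left open
        // that could leak a pooled connection. Bucket and key are placeholders.
        public static String read(AmazonS3Client s3, String bucket, String key) {
            return s3.getObjectAsString(bucket, key);
        }
    }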
|
I am facing a similar issue with a DynamoDB read operation.
As per the CloudWatch metrics for the DDB read call, there was a spike in consumed read capacity to 326K at that time, for more than 5 data points over 5 minutes.
Hi @ilaipi, I have the same problem in my project. It looks like many TCP connections are in CLOSE_WAIT status. Could you give us some advice on how to fix it?
Timeout waiting for connection from pool
A check of connections on a server showed a lot of CLOSE_WAITs,
which is often indicative of an Apache client request not having close() called on it. Checking CloudWatch, there was a corresponding latency increase in DDB around the time we saw this (up to 8s).
There wasn't a change in how we call DynamoDB at the time. I am wondering whether there is an issue in the client when server latencies are high, such that connections don't get closed?
This happened on 1.8.3. We're going to upgrade to 1.8.9.1, but it's hard to tell if related code was changed; the release diffs are too big to review easily :)