RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. #281
We've had this issue reported against other SDKs, so you can read through those issues to get a sense of the root cause, but generally speaking this happens when your provided Content-Length is larger than the number of bytes actually sent, causing S3 to wait for the specified number of bytes and then time out while waiting.

In other SDKs this usually manifests as a retry issue: the SDK attempts to retry an unrelated error but forgets to rewind the stream, causing fewer bytes to be sent the second time. In Node.js, streams are not rewindable, so we cannot rewind and retry these operations.

Can you provide more information on how you are sending the object? Are you using a stream, by any chance? We could start retrying these errors to alleviate some of the cases where this occurs, but we could not do it for streams, since they are not rewindable. As a workaround, you can also wrap your upload code in a block that checks for this error and retries.
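A minimal sketch of such a wrapper, assuming a callback-style put operation with a rewindable (Buffer or string, not stream) body; `putWithRetry` and `doPut` are hypothetical names, not SDK APIs:

```javascript
// Hypothetical retry wrapper (sketch). `doPut` stands in for a call
// such as s3.putObject with a Buffer or string Body -- NOT a stream,
// since a consumed stream cannot be sent again.
function putWithRetry(doPut, maxAttempts, callback) {
  var attempts = 0;
  (function attempt() {
    attempts += 1;
    doPut(function (err, data) {
      if (err && err.code === 'RequestTimeout' && attempts < maxAttempts) {
        return attempt(); // resending is safe because the body is rewindable
      }
      callback(err, data);
    });
  })();
}
```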
Hi Loren. We send the object with a string in Body and no Content-Length specified, via `s3.putObject({ ... })`, and in most cases the SDK calculates the Content-Length fine and there are no problems getting a response. Is there some way to hint to the SDK that we do not use a stream?
It's possible that the timeout error is legitimate (data corruption) and the SDK should retry, but we would not be able to do that for streams. In your case, retry behavior should work. I agree that the SDK should support out-of-the-box retries for your scenario.
I just added a change to retry these errors (up to a maximum number of times configured through maxRetries) so you don't have to. This will make it into the next release.
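For reference, a configuration sketch assuming the aws-sdk client options of that era; `maxRetries` only bounds how many times the SDK re-attempts errors it considers retryable:

```javascript
var AWS = require('aws-sdk');

// maxRetries caps the SDK's automatic retry count; with the change
// described above, RequestTimeout joins the retryable set for
// non-stream bodies.
var s3 = new AWS.S3({ maxRetries: 5 });
```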
Sweet. Thank you!
I've run into this a bit myself and was really happy to see this fix get in, thanks. Eagerly awaiting the next version!
I use a stream to upload a war file to s3 and set the content length with the file size. On slow internet connections (1 Mbps upload) I get this error:
I am using version 2.1.33, so the workaround doesn't work for me. @lsegal Could you explain a fix for a stream?
Hi,
I realized the console.log message was being printed three times instead of once. Two of them were for a couple of fields in the form from which I do the upload, since the parser emits fields as parts just like files. So then I tried this:
The new `if (part.filename)` check ensures I only attempt the upload for the file and not for the other fields in the form. The timeout error never occurred again.
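The idea reduces to a tiny predicate. `part` here is assumed to follow the shape of form parsers like multiparty, where file parts carry a `filename` and plain fields do not; `isFilePart` is a hypothetical helper name:

```javascript
// Hypothetical helper: treat a parsed form part as a file upload only
// when it carries a filename; plain fields (e.g. text inputs) do not.
function isFilePart(part) {
  return Boolean(part && part.filename);
}

// Usage sketch inside a form handler (names are illustrative):
//   form.on('part', function (part) {
//     if (isFilePart(part)) { /* s3.putObject with the part as Body */ }
//     else { part.resume(); } // drain and ignore non-file fields
//   });
```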
As mentioned, it turned out to be a slightly off Content-Length.
I don't have time to debug this, but I got the error as well. I uploaded a file from a website to my server (Node) using Ajax, and then uploaded it from my server to S3 with code looking like this:

```javascript
var stream = fs.createReadStream(pathToFile)
s3.putObject({
  Bucket: bucketName,
  Key: key,
  Body: stream
}, function(err, data){ /* Do stuff... */ })
```

The funny thing is that it always fails for the first Ajax request I send, but it works for all that come after. I don't know if this is a bug in the AWS SDK or in the package. However, when I change my code to:

```javascript
fs.readFile(pathToFile, function(err, data){
  if(err){
    // Handle fail...
  }else{
    s3.putObject({
      Bucket: bucketName,
      Key: key,
      Body: data
    }, function(err, data){ /* Do stuff... */ })
  }
})
```

it works for all my Ajax requests. I might have made a mistake somewhere (I'm not an experienced Node programmer), but hopefully this will help someone.
Anybody fixed this? |
As mentioned above, the SDK is not able to automatically retry uploads of streams. S3 will automatically close connections that are idle for too long, and when uploading streaming data you will need to react to this error by re-initializing your input stream and retrying the request.
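A sketch of that pattern with hypothetical names: each attempt asks a factory for a brand-new stream (e.g. `function () { return fs.createReadStream(path); }`), because a partially consumed stream cannot be rewound; `send` stands in for the actual S3 call:

```javascript
// Sketch: retry a streaming upload by rebuilding the stream each time.
// streamFactory() must return a FRESH readable stream per attempt.
function uploadWithFreshStream(streamFactory, send, maxAttempts, cb) {
  var attempts = 0;
  (function attempt() {
    attempts += 1;
    send(streamFactory(), function (err, data) {
      if (err && err.code === 'RequestTimeout' && attempts < maxAttempts) {
        return attempt(); // the next attempt gets a new stream
      }
      cb(err, data);
    });
  })();
}
```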
@jeskew, there shouldn't be any need for a retry if it worked as it should, right? Something is wrong and needs to be fixed? Or am I missing something?
@PeppeL-G If a socket remains idle for too long, then S3 will close the connection and the operation will need to be retried. Common causes of socket timeouts include slow source streams, saturated event loops, and HTTP agents juggling too many concurrent connections.
I encountered this issue on the latest npm version. I have re-tested this on both versions many times, so it is very unlikely that the root issue is somewhere else.
Now I have pinpointed the issue. Further investigation: the release change on
Setting it works. The following don't work:
I had the same issue. Changing the signatureVersion to 'v2' fixed the problem for me.
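For anyone who wants to try the same thing, a configuration sketch assuming the aws-sdk v2 client options. Note that newer S3 regions only accept Signature Version 4, so treat this as a diagnostic step rather than a fix:

```javascript
var AWS = require('aws-sdk');

// Force Signature Version 2 on the S3 client, as the commenter did.
var s3 = new AWS.S3({ signatureVersion: 'v2' });
```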
I have the same issue, but when I try signature v2:
It's quite unbelievable that such commonly used features break like this. Is there any workaround? I can only use putObject, but putObject times out if the file is larger than 100 kB. Is this useless?
Okay, the funny thing is I was trying it with Postman and I didn't put
Thanks fikriauliya, this worked for me :) |
If I may suggest, instead of breaking our teeth ... this is a very stable and good library utilizing the Amazon S3 multipart upload. It works like a charm even for files that take more than 1 hour to upload, and it can resume.
Just use `upload()`.
@paulduthoit Worked! Thanks! |
Difference between upload() and putObject() for uploading a file to S3? |
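In short, `upload()` is a managed wrapper: for large bodies it switches to multipart upload, retries failed parts, and does not require an up-front Content-Length, which is why it copes with streams where a single `putObject()` request times out. A usage sketch with a hypothetical bucket, key, and path (not runnable without AWS credentials):

```javascript
var AWS = require('aws-sdk');
var fs = require('fs');
var s3 = new AWS.S3();

// upload() accepts a stream Body and manages multipart + per-part
// retries itself; putObject() sends one request and needs the exact
// Content-Length, so a stalled stream surfaces as RequestTimeout.
s3.upload({
  Bucket: 'my-bucket',                    // hypothetical bucket
  Key: 'app.war',
  Body: fs.createReadStream('./app.war')  // hypothetical path
}, function (err, data) {
  if (err) return console.error('upload failed:', err);
  console.log('uploaded to', data.Location);
});
```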
It still doesn't work?
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread. |
I'm getting this error once in a while when making a `putObject` call. What is the root cause of the error, and what's the best way to avoid it?
According to the API response, it looks like this error is non-retryable. Is there a way to configure `aws-sdk` to retry when this type of error happens? Example response:

```json
{"message":"Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.","code":"RequestTimeout","time":"2014-05-21T00:50:25.709Z","statusCode":400,"retryable":false,"_willRetry":false}
```