
RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. #281

Closed
doronin opened this issue May 21, 2014 · 29 comments
Labels
feature-request A feature should be added or improved.

Comments

@doronin

doronin commented May 21, 2014

I'm getting this error once in a while when making a putObject call.

What is the root cause of the error, and what's the best way to avoid it?
According to the API response, this error appears to be non-retryable. Is there a way to configure aws-sdk to retry when this type of error happens?

example response:
{"message":"Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.","code":"RequestTimeout","time":"2014-05-21T00:50:25.709Z","statusCode":400,"retryable":false,"_willRetry":false}

@lsegal
Contributor

lsegal commented May 21, 2014

We've had this issue reported against other SDKs, so you can read through the related issues (below) to get a sense of the root cause, but generally speaking this happens when the Content-Length you provide is larger than the number of bytes actually sent, causing S3 to wait for the remaining bytes and then time out while waiting.

In other SDKs this usually manifests itself as a retry issue: the SDK will attempt to retry an unrelated error but forget to rewind the stream, causing fewer bytes to be sent the second time. In Node.js, streams are not rewindable, so we cannot rewind and retry these operations.

Can you provide more information on how you are sending the object? Are you using a stream, by any chance?

We could start retrying these errors to alleviate some of the cases where this occurs, but we could not do it for streams, since they are not rewindable.

As a workaround, you can also wrap your upload code in a block that checks for err.code === 'RequestTimeout' and retries manually.
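For example, a rough sketch of such a wrapper (the helper name, bucket/key values, and retry count below are just for illustration, not part of the SDK):

// Hypothetical helper: manually retry putObject on RequestTimeout.
// Only safe when Body is a string or Buffer, not a stream.
function putObjectWithRetry(s3, params, retriesLeft, callback) {
    s3.putObject(params, function(err, data) {
        if (err && err.code === 'RequestTimeout' && retriesLeft > 0) {
            return putObjectWithRetry(s3, params, retriesLeft - 1, callback);
        }
        callback(err, data);
    });
}

putObjectWithRetry(s3, { Bucket: 'my-bucket', Key: 'my-key', Body: 'some string' }, 3, function(err, data) {
    if (err) console.error('Upload failed after retries:', err);
});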

Related issues:

aws/aws-sdk-php#29
aws/aws-cli#401
aws/aws-sdk-ruby#241

@doronin
Author

doronin commented May 21, 2014

Hi Loren. We send the object with a string body and no ContentLength specified:

s3.putObject({
    Key: 'some string',
    Bucket: 'some string',
    Body: 'some string'
})

In most cases the SDK seems to calculate the Content-Length fine and there are no problems getting a response.

Is there some way to hint to the SDK that we are not using a stream and that RequestTimeout can be retried? If not, I guess our best bet is a wrapper around s3.putObject calls that handles the retry logic. I was mainly holding off on a custom retry strategy because there is a native one in aws-sdk.

@lsegal
Contributor

lsegal commented May 21, 2014

It's possible that the timeout error is legitimate (data corruption) and the SDK should retry, but we would not be able to do that for streams. In your case, retry behavior should work. I agree that the SDK should support out-of-the-box retries for your scenario.

@lsegal
Contributor

lsegal commented Jun 18, 2014

I just added a change to retry these errors (up to the maximum number of times configured through maxRetries) so you don't have to. This will make it into the next release.
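For reference, maxRetries is set on the client configuration; a minimal example (the value 5 is arbitrary):

var AWS = require('aws-sdk');

// maxRetries caps how many times the SDK retries errors it considers retryable.
var s3 = new AWS.S3({ maxRetries: 5 });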

@doronin
Author

doronin commented Jun 18, 2014

Sweet. Thank you!

@bdeitte

bdeitte commented Jun 19, 2014

I've run into this a bit myself and was really happy to see this fix get in, thanks. Eagerly awaiting the next version!

@fabwu

fabwu commented Jun 15, 2015

I use a stream to upload a war file to S3 and set the Content-Length to the file size. On slow internet connections (1 Mbps upload) I get this error:

Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.

I am using version 2.1.33, so the workaround doesn't work for me.

@lsegal Could you explain how to fix this for a stream?

@mig82

mig82 commented Aug 20, 2015

Hi,
I'm using aws-sdk for Node.js and I ran into the same issue. I seem to have found what the problem is. This is what I was originally doing:

form.on('part', function(part) {
    var opts = {
        Bucket: myBucket,
        Key: myDestPath,
        ACL: 'public-read',
        Body: part,
        ContentLength: part.byteCount,
    };

    s3.putObject(opts, function(err, data) {
        if (err) throw err;
        console.log("File upload completed!", data);
        res.writeHead(200, {'content-type': 'text/plain'});
        res.end('Ok');
    });
});

I realized the console.log message was being printed three times instead of once. Two of those were for a couple of fields in the form I upload from; fields come through as parts just like files do. So then I tried this:

form.on('part', function(part) {
    if (part.filename) {
        console.log("Received a part %o", part);
        var opts = {
            Bucket: myBucket,
            Key: myDestPath,
            ACL: 'public-read',
            Body: part,
            ContentLength: part.byteCount,
        };

        s3.putObject(opts, function(err, data) {
            if (err) throw err;
            console.log("File upload completed!", data);
            res.writeHead(200, {'content-type': 'text/plain'});
            res.end('Ok');
        });
    }
});

The new if (part.filename) check ensures I only attempt the upload for the file and not for the other fields in the form. The timeout error never occurred again.
I hope this helps.

@armw4

armw4 commented Mar 8, 2016

As mentioned, it turned out to be a slightly off Content-Length header for me. If you don't pass it, the SDK will just read the stream through to the end (and work fine).

@PeppeL-G

PeppeL-G commented Oct 1, 2016

I don't have time to debug this, but I got the error as well. I uploaded a file from a website to my server (Node) using Ajax, and then uploaded it from my server to S3 with code like this:

var stream = fs.createReadStream(pathToFile)

s3.putObject({
    Bucket: bucketName,
    Key: key,
    Body: stream
}, function(err, data){ /* Do stuff... */ })

The funny thing is that it always fails for the first Ajax request I send, but works for all that come after. I don't know if this is a bug in the AWS SDK or in the package multer (with express) I use to receive the uploaded file on my server (although as far as I can tell I receive the file perfectly well; it's saved on my server).

However, when I change my code to:

fs.readFile(pathToFile, function(err, data){
    if(err){
        // Handle fail...
    }else{
        s3.putObject({
            Bucket: bucketName,
            Key: key,
            Body: data
        }, function(err, data){ /* Do stuff... */ })
    }
})

It works for all my Ajax requests.

I might have made a mistake somewhere (I'm not an experienced Node programmer), but hopefully this will help someone.

@vsmori

vsmori commented Jun 8, 2017

Anybody fixed this?

2 similar comments
@felipemarques

Anybody fixed this?

@felipeteodoro

Anybody fixed this?

@jeskew
Contributor

jeskew commented Jul 7, 2017

As mentioned above, the SDK is not able to automatically retry uploads of streams. S3 will automatically close connections that are idle for too long, and when uploading streaming data you will need to react to this error by re-initializing your input stream and retrying the PutObject operation.
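A rough sketch of that pattern, assuming the data comes from a file on disk (the helper name and attempt count are illustrative, not part of the SDK):

var fs = require('fs');

// Illustrative only: re-create the read stream on every attempt, because a
// partially consumed stream cannot be rewound and sent again.
function uploadWithRetry(s3, params, pathToFile, attemptsLeft, callback) {
    params.Body = fs.createReadStream(pathToFile);
    s3.putObject(params, function(err, data) {
        if (err && err.code === 'RequestTimeout' && attemptsLeft > 1) {
            return uploadWithRetry(s3, params, pathToFile, attemptsLeft - 1, callback);
        }
        callback(err, data);
    });
}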

@PeppeL-G

PeppeL-G commented Jul 8, 2017

@jeskew, there shouldn't be any need for a retry if it worked as it should, right? Something is wrong and needs to be fixed, or am I missing something?

@jeskew
Contributor

jeskew commented Jul 8, 2017

@PeppeL-G If a socket remains idle for too long a period of time, then S3 will close the connection and the operation will need to be retried. Common causes of socket timeouts include slow source streams, saturated event loops, and HTTP agents juggling too many concurrent connections.
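If agent saturation is the suspect, one thing worth trying (just a sketch; the socket limit and timeout values below are arbitrary) is giving the client its own keep-alive agent with a bounded socket pool and a longer socket timeout:

var https = require('https');
var AWS = require('aws-sdk');

// Arbitrary example values: a dedicated keep-alive agent with a bounded
// socket pool, plus a longer socket timeout (in milliseconds).
var s3 = new AWS.S3({
    httpOptions: {
        agent: new https.Agent({ keepAlive: true, maxSockets: 25 }),
        timeout: 120000
    }
});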

@fikriauliya

I encountered this issue on the latest npm version, 2.82.0.
It works fine on 2.63.0.

I have re-tested this on both versions many times, so it is very unlikely that the root cause is somewhere else.

@fikriauliya

Now I have pinpointed the issue to 2.68.0.
Versions <= 2.67.0 have no problems.

Further investigation:

The release note for 2.68.0 is:

feature: S3: Switches S3 to use signatureVersion "v4" by default. To continue using signatureVersion "v2", set the signatureVersion: "v2" option in the S3 service client configuration. Presigned URLs will continue using "v2" by default.

Setting signatureVersion to v2 solves this problem:

const s3 = new AWS.S3({signatureVersion: 'v2'});

The following don't work:

const s3 = new AWS.S3(); => RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
const s3 = new AWS.S3({signatureVersion: 'v4'}); => RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
const s3 = new AWS.S3({signatureVersion: 'v3'}); => AccessDenied: Access Denied

@subugupta

I had the same issue. Changing the signatureVersion to 'v2' fixed the problem for me.

@syberkitten

I have the same issue, but when I try signature v2 I get:

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256

@syberkitten

It's quite unbelievable that such commonly used features are still faulty after so many years! 👎

Is there any workaround?

I can only use putObject, since the upload method uploads 0-byte (corrupt) files, but putObject times out if the file is larger than 100 KB.

Is this useless?

@hamxabaig

Okay, the funny thing is I was trying it with Postman and I didn't put multipart/form-data in the header. Silly me.

@chrisbeyer

Setting signatureVersion to v2 solves this problem:
const s3 = new AWS.S3({signatureVersion: 'v2'});

Thanks fikriauliya, this worked for me :)

@syberkitten

syberkitten commented Oct 23, 2017

If I may suggest, instead of breaking our teeth on this:

This is a very stable and good library that uses the Amazon S3 multipart upload mechanism, with a low memory footprint and chunked uploads, and it lets you upload GBs of data.

It works like a charm even for files that take more than an hour to upload, and it can resume broken uploads.

https://github.com/nathanpeck/s3-upload-stream

@paulduthoit

Just use the s3.upload method instead of s3.putObject. ;)
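For example, a minimal sketch (the bucket, key, and file path are placeholders). s3.upload accepts a stream Body and manages the multipart upload for you, so it does not need an exact Content-Length up front:

var fs = require('fs');
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

s3.upload({
    Bucket: 'my-bucket',                        // placeholder
    Key: 'my-key',                              // placeholder
    Body: fs.createReadStream('/path/to/file')  // placeholder path
}, function(err, data) {
    if (err) return console.error('Upload failed:', err);
    console.log('Uploaded to', data.Location);
});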

@mudivili

mudivili commented Feb 9, 2018

@paulduthoit Worked! Thanks!

@cortezcristian

cortezcristian commented Jul 31, 2018

Difference between upload() and putObject() for uploading a file to S3?
https://stackoverflow.com/questions/38442512/difference-between-upload-and-putobject-for-uploading-a-file-to-s3

@Ishank-dubey

Ishank-dubey commented Aug 12, 2018

It still doesn't work?

@lock

lock bot commented Sep 29, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.

@lock lock bot locked as resolved and limited conversation to collaborators Sep 29, 2019