Timeout error on large file upload #877

Closed
papayaglobal opened this issue Jan 17, 2016 · 6 comments

@papayaglobal

Hi,

I'm trying to upload a large file (68MB) and getting the following error:

{ [TimeoutError: Connection timed out after 120000ms]
  message: 'Connection timed out after 120000ms',
  code: 'NetworkingError',
  time: Sun Jan 17 2016 12:20:31 GMT+0200 (IST),
  region: 'us-east-1',
  hostname: 'myBucket.s3.amazonaws.com',
  retryable: true }

When I tried to upload a small file (several KB) all worked well.

This is the code that handles the upload (TypeScript):

let s3 = new AWS.S3();
let body = fs.createReadStream(this.distPath);
let params = {Bucket: "myBucket", Key: "myKey", Body: body};
// upload() called without a callback returns a ManagedUpload,
// which exposes progress events and send()
s3.upload(params)
.on("httpUploadProgress", (progress) => {
    console.log(progress);
})
.send((err, data) => {
    if (err) {
        console.log(err);
    }
});

The progress prints to the console, but when it reaches the end it hangs and eventually times out.

Node version: v5.4.0
aws-sdk version: 2.2.30

@LiuJoyceC
Contributor

Hi @oherman1,
Thanks for providing this information.

Could you provide more of the exact code that you're running? Since you've referenced 'this' twice in your code snippet, it would be helpful to see the object that 'this' is referencing.

Also, it would help if you could enable the logger and provide us with your log messages when you get this error again. To enable the logger, instantiate your S3 instance with new AWS.S3({logger: console}).

Finally, have you tried uploading any other large file, in case the error has to do with that specific file rather than the file size? Using your code (modified to not reference 'this') and your specified versions of Node and aws-sdk, I successfully uploaded a 65MB file, so I'd like to see if there are other factors besides file size that could be causing this error.
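
For concreteness, a minimal sketch of that logger setup (the {logger: console} option is the standard v2 SDK mechanism mentioned above; everything else is placeholder):

let AWS = require("aws-sdk");

// With a logger attached, every request the SDK makes (operation name,
// HTTP status, timing, retry count) is printed to the console.
let s3 = new AWS.S3({logger: console});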

@papayaglobal
Author

Hi @LiuJoyceC,

The upload is part of a gulp task and the file is being written just before the upload.
When the file is small, everything works as expected, but when it's big (over 10MB), it fails.

Here is the code:

// aws-task.ts

"use strict";

let AWS = require("aws-sdk");
import * as fs from "fs";

/**
 * Class for uploading a file to S3
 */
export class AwsTask {

    private filePath: string;

    /**
     * constructor
     * @param {string} filePath
     */
    constructor(filePath: string) {
        this.filePath = filePath;
    }

    public upload(): Promise<void> {
        let p = new Promise<void>( (resolve, reject) => {
            let body = fs.createReadStream(this.filePath);
            let params = {Bucket: "my-bucket", Key: "my-key.zip", Body: body};
            let s3 = new AWS.S3({logger: console});
            s3.upload(params)
            .send( (err, data) => {
                if (err) {
                    console.log(err);
                    reject(err);
                }
                else {
                    resolve();
                }
            });
        });
        return p;
    }
}

And this is the log:

[AWS s3 200 1.292s 0 retries] createMultipartUpload({ Bucket: 'my-bucket', Key: 'my-key.zip' })
[AWS s3 undefined 485.203s 3 retries] uploadPart({ Body: <Buffer 50 4b 03 04 14 00 00 08 00 00 b6 8a 36 48 00 00 00 00 00 00 00 00 00 00 00 00 07 00 00 00 61 62 62 72 65 76 2f 50 4b 03 04 14 00 00 08 00 00 b6 8a 36 ... >,
  ContentLength: 5242880,
  PartNumber: 1,
  Bucket: 'my-bucket',
  Key: 'my-key.zip',
  UploadId: '3MSnX7s.v7O_FWMD0ww82RGs.S5lsLxtmJWnz3mnjiRhsUlpwsiPIY3xTq_hTl6MyPFRR1OvYj7hX5payvZ2Fgqt7IPpojaPGEENJM2Tvg9dbD3qxPYqZxJbBlpsR46t' })
{ [TimeoutError: Connection timed out after 120000ms]
  message: 'Connection timed out after 120000ms',
  code: 'NetworkingError',
  time: Fri Jan 22 2016 17:30:13 GMT+0200 (IST),
  region: 'us-east-1',
  hostname: 'my-bucket.s3.amazonaws.com',
  retryable: true }

And here is the log of a smaller file:

[AWS s3 200 1.175s 0 retries] createMultipartUpload({ Bucket: 'my-bucket', Key: 'my-key.zip' })
  [AWS s3 200 2.397s 0 retries] uploadPart({ Body: <Buffer 4f 0b 51 71 c8 b5 52 d2 dd f8 08 67 bd bd ac ea e6 f5 5c 60 67 e1 5d c3 61 22 ac 94 82 3e 9b 16 9a 83 8c 11 b1 f1 30 af 7b 14 46 62 22 b3 21 cf 67 72 ... >,
    ContentLength: 72202,
    PartNumber: 2,
    Bucket: 'my-bucket',
    Key: 'my-key.zip',
    UploadId: 'wHfIDbZD56M3WghgujwCBC7qwfnnN4.s5yqdFnkr7qI4vQXizQhdu3puWSYLDLoF7j7iYcRtb..zYiVy0zQYWfEEkYj8LPs5OK4JONU0QHOyi_PLazcJUe9tdOhcKfZ0' })
  [AWS s3 200 58.755s 0 retries] uploadPart({ Body: <Buffer 50 4b 03 04 14 00 00 08 00 00 25 8a 36 48 00 00 00 00 00 00 00 00 00 00 00 00 08 00 00 00 62 61 63 6b 65 6e 64 2f 50 4b 03 04 14 00 00 08 00 00 25 8a ... >,
    ContentLength: 5242880,
    PartNumber: 1,
    Bucket: 'my-bucket',
    Key: 'my-key.zip',
    UploadId: 'wHfIDbZD56M3WghgujwCBC7qwfnnN4.s5yqdFnkr7qI4vQXizQhdu3puWSYLDLoF7j7iYcRtb..zYiVy0zQYWfEEkYj8LPs5OK4JONU0QHOyi_PLazcJUe9tdOhcKfZ0' })
  [AWS s3 200 1.601s 0 retries] completeMultipartUpload({ MultipartUpload: 
     { Parts: 
        [ { ETag: '"a61a5edba1f0c72441e150ed161de5c6"', PartNumber: 1 },
          { ETag: '"8aa5177502f0d50d852d089516524617"', PartNumber: 2 },
          [length]: 2 ] },
    Bucket: 'my-bucket',
    Key: 'my-key.zip',
    UploadId: 'wHfIDbZD56M3WghgujwCBC7qwfnnN4.s5yqdFnkr7qI4vQXizQhdu3puWSYLDLoF7j7iYcRtb..zYiVy0zQYWfEEkYj8LPs5OK4JONU0QHOyi_PLazcJUe9tdOhcKfZ0' })

@LiuJoyceC
Contributor

Hi @oherman1,
Thanks for the additional information. You mentioned that the file is being written just before the upload. Are you performing the write operation synchronously or asynchronously? And if the file is being written asynchronously, are you ensuring that the file is finished writing before you begin the upload?

It's possible that the small file is written quickly enough not to cause a problem, but the larger file takes longer to write, potentially allowing the upload to begin before the write operation finishes. Can you provide the portion of code in your gulpfile that defines the gulp task for the write operation and the gulp task that calls the upload?
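
As an illustration of that ordering, a minimal sketch that waits for the write stream's 'finish' event before starting the upload (the paths and the piped source are placeholders, not your actual gulpfile):

import * as fs from "fs";

// Hypothetical wiring: only start the upload once 'finish' fires,
// i.e. once the file has been fully flushed to disk.
let source = fs.createReadStream("build/output.zip"); // stand-in for the real build output
let out = fs.createWriteStream("dist/my-key.zip");

out.on("finish", () => {
    new AwsTask("dist/my-key.zip").upload()
        .then(() => console.log("upload complete"))
        .catch((err) => console.error("upload failed:", err));
});

source.pipe(out);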

@papayaglobal
Author

Hi @LiuJoyceC,

Tried it on a different network and it's working. Thanks very much for your help.

Ofer

@morbo84

morbo84 commented Aug 25, 2017

@papayaglobal A bit late to the party, but I think your problem is related to what is described here and here.

I've noticed the same behavior with my poor home connection (0.5 Mbit/s upload), and I eventually found that if you can't upload at least 5MB (a single minimum-size part) within the timeout period, you're out of luck; furthermore, by default the S3.upload() method spawns several part uploads concurrently, so the probability of hitting the timeout on a slow connection is even greater.
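
For anyone else hitting this, a sketch of the relevant knobs (httpOptions.timeout and the upload() partSize/queueSize options are documented v2 SDK settings; the values and paths here are illustrative). At 0.5 Mbit/s, a single 5MB part already takes roughly 80 seconds, and with the default four parts in flight each part can easily blow past the two-minute timeout:

let AWS = require("aws-sdk");
let fs = require("fs");

// Raise the per-request socket timeout above the 120000ms default.
let s3 = new AWS.S3({httpOptions: {timeout: 600000}});

let body = fs.createReadStream("dist/my-key.zip");
s3.upload(
    {Bucket: "my-bucket", Key: "my-key.zip", Body: body},
    {
        partSize: 5 * 1024 * 1024, // 5MB, the S3 minimum part size
        queueSize: 1               // upload one part at a time instead of the default 4
    },
    (err, data) => {
        if (err) console.error(err);
        else console.log("uploaded to", data.Location);
    }
);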

@lock

lock bot commented Sep 29, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.

@lock lock bot locked as resolved and limited conversation to collaborators Sep 29, 2019