
S3.putObject RangeError: Maximum call stack size exceeded node v0.10.15 #158

Closed
Meekohi opened this issue Sep 17, 2013 · 15 comments

Labels
duplicate (This issue is a duplicate.) · third-party (This issue is related to third-party libraries or applications.)

Comments

@Meekohi

Meekohi commented Sep 17, 2013

This bug does not exist with node v0.10.13 but does on v0.10.15; I have not tested v0.10.18.

When doing putObject with a large file (100 MB+), node issues this warning many times:

(node) warning: Recursive process.nextTick detected. This will break in the next version of node. Please use setImmediate for recursive deferral.

and then crashes:

RangeError: Maximum call stack size exceeded

The best quick fix is to use a stream instead (probably what you wanted anyway):

var stream = fs.createReadStream('filename.zip');
var uploadOptions = {
    Bucket: s3bucket,
    Key: s3key,
    ACL: "public-read",
    ContentType: "application/zip",
    Body: stream
};
s3.putObject(uploadOptions, function(err, data){ ... });
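As an aside, the warning's suggestion of setImmediate for recursive deferral can be illustrated in isolation (a minimal sketch, unrelated to the SDK itself):

```javascript
// Recursing via process.nextTick never lets the event loop turn, which is
// what triggers the warning above; setImmediate yields between iterations,
// so deep "recursion" completes without growing the call stack.
var iterations = 0;

function step(done) {
  if (iterations === 10000) return done();
  iterations++;
  setImmediate(function () { step(done); });
}

step(function () {
  console.log('completed', iterations, 'iterations');
});
```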
@lsegal
Contributor

lsegal commented Sep 17, 2013

This looks like a duplicate of #150, which is not an issue in the SDK. Given that I was not able to reproduce this with v0.10.16, I would recommend upgrading Node.js, since it looks like a regression. I'm going to close this as a third-party issue.

@springmeyer

Noting here that I also saw this with node v0.10.26 on Windows.

@lsegal
Contributor

lsegal commented Mar 17, 2014

@springmeyer can you provide a test case that reproduces the error for you?

@springmeyer

Hi @lsegal - I'm trying to create a reproducible test case for the process.nextTick error, but having trouble. I don't have access to the machine where the warnings happened right now (they were on a Windows AppVeyor box - logs at https://ci.appveyor.com/project/BergWerkGIS/node-mapnik/build/1.0.17#L939), so I'm trying to replicate locally on OS X. In the process I'm hitting a different issue, which may be related: with v2.0.0-rc11, an error thrown inside the s3.putObject callback, or invalid state passed into putObject, can cause the callback to never be called, or to be called multiple times. Here is one test case (an invalid zero-length buffer passed as the body) that leads to a hang: https://gist.github.com/springmeyer/0a4e06bdec994db751b2. With that test case, node test.js hangs for me on OS X with node v0.10.26.

@springmeyer

Added a readme and a second test to https://gist.github.com/springmeyer/0a4e06bdec994db751b2. Neither replicates the process.nextTick issue, but I did see it once while working to create these test cases.

@lsegal
Contributor

lsegal commented Mar 17, 2014

@springmeyer I fixed the zero-buffer issue, investigating the second issue now.

@lsegal
Contributor

lsegal commented Mar 18, 2014

@springmeyer I've just pushed a fix for #248 which should fix both issues in your gist. The process.nextTick issue probably will not be resolved by this, though.

@springmeyer

@lsegal - excellent, thank you. I can confirm that both issues are also fixed in my local testing with latest master. As for the process.nextTick mystery, I'm planning to move to multipart uploads (mapbox/node-pre-gyp#51), so I'm unlikely to hit this again. But I'll certainly post back if I do.

Btw, are there any code samples for how to best implement multipart uploads beyond the docs you mention at #173 (comment)?
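Not an official sample, but the part-splitting side of a multipart upload can be sketched as pure logic (splitPartRanges is a hypothetical helper, not an SDK API; each range would then feed an uploadPart call as Body: buffer.slice(start, end) with PartNumber i + 1, followed by completeMultipartUpload):

```javascript
// S3 requires every part of a multipart upload except the last to be
// at least 5 MB; this hypothetical helper computes the byte ranges.
var MIN_PART_SIZE = 5 * 1024 * 1024;

function splitPartRanges(totalSize, partSize) {
  if (partSize < MIN_PART_SIZE) {
    throw new Error('part size is below the 5 MB S3 minimum');
  }
  var ranges = [];
  for (var start = 0; start < totalSize; start += partSize) {
    ranges.push({ start: start, end: Math.min(start + partSize, totalSize) });
  }
  return ranges;
}

// e.g. a 12 MB object in 5 MB parts -> [0,5MB), [5MB,10MB), [10MB,12MB)
```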

@springmeyer

In a last attempt to uncover the process.nextTick issue, I noticed something else slightly odd (though maybe intended): if you pass an empty string for the Key, then instead of an error being thrown, the upload succeeds and becomes named {Key}. Test case:

var AWS = require('aws-sdk');
var fs = require('fs');

var s3 = new AWS.S3();
var s3_obj_opts = {
    Body: fs.readFileSync('./e'),
    Bucket: 'node-pre-gyp-tests',
    Key: ''
};
s3.putObject(s3_obj_opts, function(err, resp) {
    if (err) throw err;
});
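Until that's addressed in the SDK, a client-side guard avoids the silent empty-Key upload (a sketch; validateKey is a hypothetical helper, not an SDK API):

```javascript
// Hypothetical guard: fail fast on an empty or whitespace-only Key instead
// of letting the request through, where it would upload as "{Key}".
function validateKey(key) {
  if (typeof key !== 'string' || key.trim().length === 0) {
    throw new Error('S3 Key must be a non-empty string');
  }
  return key;
}
```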

lsegal added a commit that referenced this issue Mar 18, 2014
@lsegal
Contributor

lsegal commented Mar 18, 2014

Thanks for finding all of these issues, @springmeyer. I've just resolved the last one in the above commit.

@springmeyer

Cheers. I'll create new issues for anything else I run into. Thanks for the fast fixes.

lsegal added a commit that referenced this issue Mar 26, 2014
lsegal added a commit that referenced this issue Apr 24, 2014
@rclark
Contributor

rclark commented Oct 10, 2014

I came across this issue when providing a large buffer as the Body to an uploadPart operation. Working with a 2GB file, I could successfully upload it in 5MB parts, but could not in 50MB parts.

This is ultimately caused by an upstream node.js issue, and it may be fixed in v0.12 (nodejs/node-v0.x-archive#6065 (comment)), but I think you may still fall into another trap like nodejs/node-v0.x-archive#7401 and nodejs/node-v0.x-archive#8291.

I was able to work around the issue by wrapping my buffer in a readable stream, almost identically to aws-sdk's bufferToStream function, but providing a highWaterMark to the readable stream. This drops you out of the tight read loop in node.js's stream internals and avoids the recursion problems reported here and in other issues. You can accomplish a similar result with a setImmediate call to make bufferToStream feign "real I/O".

Here's a gist showing pass/fail cases of the bufferToStream function: https://gist.github.com/rclark/0a0d40dfd11b52e05030

@lsegal
Contributor

lsegal commented Oct 10, 2014

Setting a highWaterMark seems like a reasonable and fairly simple thing to do on our part. Are there any downsides that I might be missing?

@rclark
Contributor

rclark commented Oct 10, 2014

Are there any downsides

Yeah, maybe, but I'm honestly not sure. The number that you choose does feel somewhat arbitrary. At first, I tried setting it to zero, and that seemed to effectively stop processing. I ended up setting it to 5MB, since I was confident that buffers of that size did not trigger the recursion errors.

@lock

lock bot commented Sep 29, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs and link to relevant comments in this thread.

@lock lock bot locked as resolved and limited conversation to collaborators Sep 29, 2019