allow putStream to accept non fs based streams #32

Closed
wants to merge 1 commit
@taf2

when given a content length and content type allow putStream to upload arbitrary streams without using mime.lookup or fs.stat

@taf2 taf2 when given a content length and content type allow putStream to upload arbitrary streams without using mime.lookup or fs.stat
af64397
@Fauntleroy

Will this be merged anytime soon?

@xmilliard

I'm interested in this too.

Might it be simpler to add Content-Length and Content-Type only if they are not already present in the headers?

In place of:

    var req = self.put(filename, utils.merge({
        'Content-Length': stat.size
      , 'Content-Type': mime.lookup(stream.path)
    }, headers));

We can use:

    if (!headers['Content-Length']) {
      headers['Content-Length'] = stat.size;
    }
    if (!headers['Content-Type']) {
      headers['Content-Type'] = mime.lookup(stream.path);
    }
    var req = self.put(filename, headers);

If you want to override the standard behaviour, just prefill the headers accordingly.
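For instance, a minimal caller-side sketch of that behaviour (the stream, path, and header values here are illustrative, not from this patch):

    // Hypothetical caller: both headers are known up front, so putStream
    // would skip fs.stat and mime.lookup entirely.
    var headers = {
        'Content-Length': 12345      // exact byte count of the stream
      , 'Content-Type': 'image/png'
    };
    client.putStream(someReadableStream, '/uploads/logo.png', headers, function(err, res){
      if (err) return console.error(err);
      console.log('S3 status:', res.statusCode);
    });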

What do you think?

Xavier

@taf2

I like that!

@xmilliard

Great, how do we push this modification? Do you modify your patch, or do you prefer that I send another one?

@taf2

Sending another one would be great; I may not get back to this for a little while.

@xmilliard

I have something working for me in my repo. Is anybody interested in testing it before we propose an update on master?
We added the modification to be able to stream content directly from a request body to S3 (proxy-like); a sketch of that pattern follows.
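A minimal sketch of that proxy-like pattern, assuming this patch and hypothetical credentials (not code from the PR itself):

    var http = require('http');
    var s3 = require('knox');

    var client = s3.createClient({ key: 'XXX', secret: 'YYYY', bucket: 'ZZZ' });

    // Pipe each incoming request body straight to S3. Content-Length and
    // Content-Type come from the caller's own request headers, so nothing
    // is buffered on disk and fs.stat/mime.lookup are never needed.
    http.createServer(function(req, res){
      var headers = {
          'Content-Length': req.headers['content-length']
        , 'Content-Type': req.headers['content-type']
      };
      client.putStream(req, req.url, headers, function(err, s3res){
        res.writeHead(err ? 500 : s3res.statusCode);
        res.end();
      });
    }).listen(3000);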

@cjroebuck

I'd like to test that; we want to stream directly to S3 rather than buffering the file on the server and then streaming it.

@xmilliard

We use it regularly; it works OK for us.

@ThiagoMiranda

@xmilliard how did you do it?

I'm using the multipart upload from Amazon with a request.write(data) call as I receive each data object. It's working partially, but sometimes I get an "Assertion Error".

@timoxley

@xmilliard could you add some tests to show usage and verify it works as intended?

@xmilliard

Hello all, I just added a small example in the test dir (knox-streaming-sample.js).
Just tell me if you need more info.

@sudhir28

Hi, I'm very new to Git and Node.js. Can you help me execute the demo test using knox on my S3?
I don't understand where to start or which modifications I need to test.
Please bear with me.
Thank you. (sudhir.pingale@gmail.com)

@xmilliard

Hello Sudhir,

Assuming your project directory is MY_DIR, you must have a node_modules directory inside it with all your Node.js modules.
You retrieve the patched knox module using git clone (you must have your SSH key available). Alternatively, you can check it out somewhere else and use npm to link it.

Then go back to your MY_DIR directory and add the equivalent of the knox-streaming example code (knox/tests/knox-streaming-example.js) to your own source.

Change the line describing your Amazon credentials:

    var client = s3.createClient({ key: 'XXX', secret: 'YYYY', bucket: 'ZZZ' });

and run it.

If you want to execute knox-streaming-example.js directly in the tests directory, you have to change the line

    var s3 = require('knox');

to

    var s3 = require('../../knox');

Is that OK for you?
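For reference, here is a rough sketch of what such a streaming example does (an illustration only, not the actual contents of knox-streaming-example.js; the source URL is assumed):

    var http = require('http');
    var s3 = require('knox');

    var client = s3.createClient({ key: 'XXX', secret: 'YYYY', bucket: 'ZZZ' });

    // Fetch a remote resource and hand its response stream to putStream.
    // The upstream Content-Length and Content-Type headers let the patched
    // putStream skip fs.stat and mime.lookup.
    http.get({ host: 'www.google.com', path: '/intl/en_ALL/images/logo.gif' }, function(res){
      var headers = {
          'Content-Length': res.headers['content-length']
        , 'Content-Type': res.headers['content-type']
      };
      client.putStream(res, '/logo.gif', headers, function(err, s3res){
        if (err) return console.error(err);
        console.log('Uploaded with status', s3res.statusCode);
      });
    });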

@loqur

+1 want this...have to fork my own to get it :)

@spatical

+1 for this pull request. This worked perfectly for me to stream an image directly to S3 rather than saving it on our web server first.

@taf2

So, when I created this issue it was for a little hack I was doing to move files from FTP to S3. Here's how I did it: https://github.com/taf2/ftp2s3 and here is the line of code where this was handy: https://github.com/taf2/ftp2s3/blob/master/app.js#L190

@KyleAMathews

@spatical or someone else: could you post some sample code showing how to stream a file upload directly to S3 using this pull request? The sample code where the Google logo is streamed worked perfectly. However, when I try to stream a file upload from node-formidable, I get a 400 error from Amazon with the following message:

    <?xml version="1.0" encoding="UTF-8"?>
    <Error><Code>RequestTimeout</Code><Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message><RequestId>AF90F8223595C0A4</RequestId><HostId>iZCis1ErwCCxjMypU4xbNoSXW889rwyV7FAdvTD5IS+TkOwuCLJej8CWJcy30mTj</HostId></Error>

Can anyone help? Thanks!

@xmilliard

@KyleAMathews, can you post a sample of the non-working code?

@KyleAMathews

Haha, that would be useful.

I'm using the basic example from node-formidable to handle the file upload and provide the stream to pipe to Knox. The Knox client is created above this code snippet and definitely works, as your little HTTP streaming example works perfectly.

{StringDecoder} = require('string_decoder')  # missing in the original snippet

form = new formidable.IncomingForm()
form.onPart = (part) ->
  if not part.filename
    # let formidable handle all non-file parts,
    # and stop so they are not sent to S3
    form.handlePart(part)
    return

  headers = {}
  headers['Content-Length'] = form.bytesExpected
  headers['Content-Type'] = part.mime
  #headers['x-amz-acl'] = 'public-read'
  client.putStream(part, '/' + part.filename, headers,
    (err, res, hash, uploadSize) ->
      if err
        console.log err
      else
        console.log res.headers
        console.log res.statusCode
        res.on 'data', (chunk) ->
          decoder = new StringDecoder('utf-8')
          console.log 'body chunk:', decoder.write(chunk)
        #console.log res
        console.log("Successfully stored "+uploadSize+" bytes with hash "+hash)
  )

@xmilliard

If bytesExpected is related to the complete form content, it will be more than the file length.
If you set a Content-Length greater than the file size, Amazon keeps waiting for bytes that never come.
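To illustrate why (hypothetical boundary and field names): the multipart body that bytesExpected measures wraps the raw file bytes in extra framing, so the total is always larger than the file itself:

    --XyZBoundary
    Content-Disposition: form-data; name="upload"; filename="photo.jpg"
    Content-Type: image/jpeg

    <raw file bytes>
    --XyZBoundary--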

@KyleAMathews

Ah! That makes a lot of sense. I tested it by manually setting the Content-Length, and it uploaded successfully to Amazon. So that's very nice.

But this raises another question, one that Googling isn't answering: how do I get the actual file size? I tested uploading a number of files, and the number of non-file bytes was different each time, seemingly without any pattern. My app only supports POSTing one file at a time.

I know there's the HTML5 File API (http://i.ndigo.com.br/2012/01/javascipt-checking-the-file-size/), but that doesn't work in IE9.

@domenic

Closing in favor of #72; this is a bit stale and confusing.

@domenic domenic closed this
@domenic domenic referenced this pull request from a commit
Commit has since been removed from the repository and is no longer available.
@domenic domenic referenced this pull request from a commit
@domenic domenic Make `putStream` put any stream (finally). Closes #72.
Also reimplement `putFile` in terms of it, so that it streams files from disk. This loses the MD5 sadly, but it's a small price to pay.

Updated README with an example of an HTTP stream instead of another file stream, and added a test to make sure that example actually works.

Related: #14, #32, #48, #57.
18afc26
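For context, a rough sketch of how putFile can be reimplemented on top of putStream (an assumption pieced together from the patterns in this PR's diff, not the actual code of commit 18afc26):

    // Stream a file from disk via putStream instead of buffering it:
    // stat for the byte count, guess the type from the path, then pipe.
    Client.prototype.putFile = function(src, filename, headers, fn){
      var self = this;
      if (typeof headers === 'function') { fn = headers; headers = {}; }
      fs.stat(src, function(err, stat){
        if (err) return fn(err);
        headers = utils.merge({
            'Content-Length': stat.size
          , 'Content-Type': mime.lookup(src)
        }, headers);
        self.putStream(fs.createReadStream(src), filename, headers, fn);
      });
    };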
Commits on May 17, 2011
  1. @taf2

    when given a content length and content type allow putStream to upload arbitrary streams without using mime.lookup or fs.stat

    taf2 authored
Showing with 20 additions and 7 deletions.
  1. +20 −7 lib/knox/client.js
lib/knox/client.js
@@ -178,21 +178,34 @@ Client.prototype.putStream = function(stream, filename, headers, fn){
     fn = headers;
     headers = {};
   };
-  fs.stat(stream.path, function(err, stat){
+  var startStream = function(err, stat) {
     if (err) return fn(err);
     // TODO: sys.pump() wtf?
-    var req = self.put(filename, utils.merge({
-        'Content-Length': stat.size
-      , 'Content-Type': mime.lookup(stream.path)
-    }, headers));
-    req.on('response', function(res){
+
+    headers['Content-Length'] = stat.size;
+
+    // skip mime.lookup
+    if (!headers['Content-Type']) {
+      headers['Content-Type'] = mime.lookup(stream.path);
+    }
+
+    var req = self.put(filename, headers);
+    req.on('response', function(res) {
       fn(null, res);
     });
+
     stream
       .on('error', function(err){ fn(null, err); })
       .on('data', function(chunk){ req.write(chunk); })
       .on('end', function(){ req.end(); });
-  });
+  };
+  var contentLength = headers['Content-Length'];
+  if (contentLength && contentLength > 0) { // skip fs.stat
+    startStream(null, { size: contentLength });
+  }
+  else {
+    fs.stat(stream.path, startStream);
+  }
 };
 /**