Chunked encoding & forever() working? #401

Closed
vwal opened this Issue Jan 1, 2013 · 9 comments


@vwal
vwal commented Jan 1, 2013

I've been looking at request for the last couple of days and am now stuck. I'm looking for some pointers on the issues below (and also want to confirm that the features I'm trying to use aren't actually broken).

First, if I set headers like so:

headers: { 'Transfer-Encoding': 'chunked' }

I get nothing back (the response callback is never called). The web service I'm trying to talk to only accepts chunked connections, so I have to have it set (without the "transfer-encoding: chunked" header the service returns the error "Monitor request must be done on a chunked encoding connection."). So, does request handle chunked connections? If so, are there any special considerations in getting it to work?

I can get a "chunked" connection working against the API with https.request, but the whole point of looking at request instead is that I'd like to keep the TCP connection open and multiplex multiple requests over it (that's the recommended way to access the web service in question). https.request shuts down the socket right away, so request's forever() might solve the problem.

The second question is about forever(). How do I enable it? I saw a suggestion on Google Groups to access forever like so:

var request = require('request').forever();

But when I do that, node exits with:

/usr/local/lib/node_modules/request/forever.js:100 options.port = port

Thanks for any insights on this!

@mikeal
Member
mikeal commented Jan 1, 2013

Don't set the Transfer-Encoding header by hand; node core's http will automatically use chunked encoding if you don't set a Content-Length.
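
For example, a minimal sketch (the URL and file here are placeholders, not from this thread): pipe a stream body and leave Content-Length unset.

    var fs = require('fs');
    var request = require('request');

    // No Content-Length is set, so node's http layer sends the body
    // with Transfer-Encoding: chunked automatically.
    fs.createReadStream('data.json')
      .pipe(request.post('https://example.com/upload', function (err, res, body) {
        if (err) return console.error(err);
        console.log(res.statusCode);
      }));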

@vwal
vwal commented Jan 1, 2013

Yes, node's core http(s) seems to handle chunked encoding automatically, but it also closes the socket immediately after the request has been made, which is what I'm trying to avoid; there's no way to send another request, say, a couple of minutes later over the same socket.

Is there a way to enable chunked connections with your request module while also using the "forever" connections that aren't automatically closed? In this application I'm doing POSTs over HTTPS.
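
Something along these lines is roughly what I'm after (an untested sketch on my part; I'm assuming the forever option keeps the agent's sockets alive, and the URL is a placeholder):

    var request = require('request');

    // Assumption: forever: true makes request use a keep-alive agent, so the
    // underlying socket can be reused by later requests.
    var keepAlive = request.defaults({ forever: true });

    keepAlive.post({
      url: 'https://example.com/monitor',
      body: 'first request',
      headers: { 'Content-Type': 'text/plain' }
    }, function (err, res, body) {
      if (err) return console.error(err);
      // A second request made through the same defaults wrapper should be
      // able to reuse the still-open socket instead of opening a new one.
      keepAlive.post({ url: 'https://example.com/monitor', body: 'second request' });
    });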

@mikeal
Member
mikeal commented Aug 27, 2014

Is this still an issue?

This is so old that I'm closing it; if it's actually still an issue, just let me know and I'll re-open.

@mikeal mikeal closed this Aug 27, 2014
@odnarb
odnarb commented May 7, 2015

I have to say, this is still an issue. Every time I attempt a chunked transfer it's a total fail. Either that or my attempt is wrong. Either way, I would expect a set of examples and/or documentation for this functionality if it's supported.

Can you help out here?

The functionality I expect is asynchronous, so I don't know if request currently supports this (roughly as sketched below):
- I create a chunked transfer request and open the connection.
- Once the connection is established, I can send chunks of data from some readable stream.
- As events happen (such as reading a row from a database), I want to send each chunk of data to the server over not only the same session, but the same pipe.

Thanks for any insight.
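
The shape I have in mind is something like this untested sketch (the endpoint is a placeholder and dataSource is hypothetical, standing in for whatever emits the rows):

    var stream = require('stream');
    var request = require('request');

    // A Readable we push into manually; _read is a no-op because the data
    // arrives from external events rather than being pulled.
    var rs = new stream.Readable();
    rs._read = function () {};

    // Open the chunked POST once and keep feeding it.
    rs.pipe(request.post('https://example.com/ingest', function (err, res, body) {
      if (err) return console.error(err);
      console.log('upload finished:', res.statusCode);
    }));

    // dataSource is hypothetical: any emitter that produces rows over time.
    dataSource.on('row', function (row) {
      rs.push(JSON.stringify(row) + '\n');
    });
    dataSource.on('end', function () {
      rs.push(null); // end the stream, which ends the chunked request
    });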

@odnarb
odnarb commented May 7, 2015

After hours of struggling and testing I came up with a solution. It works; that's all I care about. I don't care if it looks terri-bad. I will post the solution to the examples area or something, then I'll update this thread.

@Sergey80

odnarb, I guess you forgot to update the thread :) I also have this issue.

@r0hitsharma

@odnarb, could you give an example of what you did for this issue? I think I'm facing a similar problem.

@odnarb
odnarb commented Feb 18, 2016

Yeah, sorry about that! It's a solution that works. Let me see if I can do that tomorrow. I am uber busy at work.

It's on my whiteboard, so no worries, it will NOT be forgotten! :)

@odnarb
odnarb commented Feb 19, 2016

I can update this with more explanation, but I'd rather get feedback from people first. Is my method BS? I hacked it together to just make it work. Oh well... love it or hate it.

Client side:
- Open a stream and your DB connection.
- Execute the query.
- Stream the DB rows into the request.
- Win.

var request = require('request');
var _ = require('underscore');

//get data from db
var extractQuery = "select top 1000 * from mytable";

//CSRF token for the target app; assumed to be obtained elsewhere
var csrf = process.env.CSRF_TOKEN;

var chunkedPostOptions = {
    method: 'POST',
    rejectUnauthorized: false, //self-signed ssl
    headers: {
        'Transfer-Encoding': 'chunked',
        'Content-Type': 'multipart/form-data',
        'Cookie': "some-cookie here"
    },
    qs: { 
        '_csrf': csrf
    },
    baseUrl: 'https://localhost'
};

var postReq = request.defaults(chunkedPostOptions);

var stream = require('stream');
var rs = new stream.Readable();
rs._read = function noop() {}; //no-op _read: we push data into this stream manually

//open the pipe here
rs.pipe(postReq('/stream_to_file', function(err,res, body){
    if( err ){
        console.log("Stream Pipe Error!");
        console.log(err);
    }
}));

//Init db connection & config
var dbConnection = require('tedious').Connection;
var dbConfig = {
    server: 'some-far-away-server.net',
    userName: 'me',
    password:'myp@ss',
    options: {
        database: 'mydb',
        useColumnNames: false //tedious is weird, use this..
    }
};

//note: the constructor variables are reused for the instances below (single-use script)
dbConnection = new dbConnection(dbConfig);
var dbRequest = require('tedious').Request;

//prep sql exec handlers
var executeStatement = function(cb) {
    dbRequest = new dbRequest(extractQuery, function(err, rowCount) {
        if (err) {
            console.log(err);
            cb();
        } else {

            totalRows = rowCount;

            if( totalRows > 0 ) {
                if( rowsProcessed == totalRows ) {
                    //end stream
                    rs.push(null);
                    cb();
                } //endif
            } else {
                cb();
            } //endif
        } //endif
    });

    dbRequest.on( 'row', function handleRow( cols ) {
        rowsProcessed+=1;
        //keep pushing content into the stream
        //get the row into a clean series of properties like:  { id: 123, first_name: 'John', last_name: 'Smith' }

        var row = {};
        _.each(cols, function(col){
            if( col.value === null ) {
                row[col.metadata.colName] = null;
            } else if( col.metadata.userType !== 80){ //skip unsigned ints (timestamps) formatted like 0x0000000024D61832
                row[col.metadata.colName] = col.value.toString();
            }
        });

        var chunkedBody = {
            row_number: rowsProcessed,
            content_type: 'application/json',
            content: JSON.stringify(row)
        };
        rs.push( JSON.stringify(chunkedBody) + '\r\n' );
    });
    dbConnection.execSql(dbRequest);
};

//actual connection handler and start execution
var rowsProcessed = 0;
var totalRows = 0;
dbConnection.on('connect', function(err) {
    // If no error, then good to go...
    if(err){
        console.log(err);
    } else {
        executeStatement(function(){
            dbConnection.close();
        });
    }
});

Server-side:
Some of this is sails.js specific; it was pulled from a sails.js controller. It's meant to capture chunks of data and stash them in the session whenever the client happens to send a really long row of columns & data, and keep patching them together until we have the end of the incoming JSON-formatted row. That way the file being written gets one nicely formatted JSON row PER line.
WIN.

    stream_to_file: function(req, res) {
        //WRITE TO FILE, CHUNKED TRANSFER
        // console.log( "Processing chunk transfer!" );

        var UUIDGenerator = require('node-uuid');

        var filename = 'my-stream-' + UUIDGenerator.v4() + '.txt';

        var fs = require('fs');
        var fd = null;
        var flags = 'w'; //see reference for flags https://nodejs.org/api/fs.html
        fd = fs.openSync(sails.config.paths.tmp + "/uploads/" + filename, flags ); //sails.js specific code

        req.on('data', function(chunk) {
            //console.log( "Got a chunk!" );
            //console.log("chunk length: " + chunk.toString().length);

            var row = "";
            var patchedChunk = '';

            try {
                if( req.session.chunkedItem == null ) { //covers undefined too
                    //nothing was saved in the session earlier, take the chunk as-is
                    row = chunk.toString();
                } else {
                    // console.log("Patching saved chunk...");
                    patchedChunk = req.session.chunkedItem + chunk.toString();
                    row = patchedChunk;

                    //clear out the saved chunk now that we have the rest of the row
                    delete req.session.chunkedItem;
                } //endif

                //make sure the row is a complete JSON object; an incomplete row
                //throws here and is stashed in the session by the catch below
                JSON.parse(row);

                if (fd) {
                    var bytesWritten = fs.writeSync(fd, row);
                    //console.log("wrote " + bytesWritten + " bytes");
                } else {
                    return res.serverError();
                }
            } catch(e) {
                // console.log("Chunk too long, trying to concatenate to one big string.")
                //append to the string saved in the session object as there could be more chunks
                if(req.session.chunkedItem == undefined) {
                    req.session.chunkedItem = chunk.toString();
                } else {
                    req.session.chunkedItem = req.session.chunkedItem + chunk.toString();
                }
                //console.log(e);
            } //end try/catch
        });

        req.on('end', function(){
            // print the output in console
            // console.log("chunk processing complete!");
            fs.closeSync(fd);
            return res.send(200);
        });
    },