Deal with large result sets #16

wankdanker opened this Issue Feb 21, 2012 · 8 comments




As reported by a user and verified by my own tests, when a query returns a large result set, the node process will eat up a lot of memory and possibly result in out of memory errors. This occurs because the resulting array of objects is built up in the binding and then passed to the callback when all the records have been returned.

We should find ways to mitigate this.


This will be solved in v0.5 and can be tested from here:

There is a result set object which has fetch() and fetchAll() methods. fetch() will get a single record asynchronously from the database.

var db = require('odbc')();
db.open(connectionString, function (err) {
    db.queryResult('select * from orders', function (err, result) {
        //now you have to use some async callback management
        //to fetch all your rows. But here we just get the first one:

        result.fetch(function (err, data) {
            //the first record is in data
        });
    });
});
I will leave this issue open until these changes make their way to an npm release.


This is done. fetch has been implemented so that individual rows can be requested and dealt with one at a time rather than allocating all the memory for a large set at once.

@wankdanker wankdanker closed this May 8, 2013
sarus commented Aug 18, 2013

Hello, I tried out the latest v0.5.25 branch and noticed that queryResult still results in large memory consumption (it used all the memory on my VM). To simplify things as much as possible, I open a connection and run queryResult without ever fetching any of the rows. The queryResult call takes about 15 to 30 seconds before the provided callback is executed. During this time the memory usage steadily increases. The test query I run returns 5 million records.

Has anyone else noticed this behavior? Could I be doing something wrong or perhaps the libraries I'm using have known issues? Thanks for your great work on this module!

(If you'd like me to open a new issue on your fork I'd be happy to do that.)

db.open(connectionString, function (err) {
    db.queryResult('select * from orders', function (err, result) {
        //queryResult call takes 15 to 30 seconds to return, during which time memory consumption
        //increases until all memory is used on the machine (no calls to fetch or fetchAll are made).
        //The process does not crash (due to swap, I think) and the query does eventually complete,
        //but the spike in memory consumption makes me nervous.
    });
});
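To make the growth easier to see, a small sampler can log the resident set size while the query is outstanding. A minimal sketch (startMemorySampler is my own helper, and the usage shown in comments is illustrative, not from the thread):

```javascript
// Hypothetical helper: logs process RSS on a timer so memory growth can be
// watched while a long queryResult call is outstanding. Samples once
// immediately, then on every interval; returns a function that stops it.
function startMemorySampler(intervalMs, log) {
  log(process.memoryUsage().rss);   // take one sample right away
  var timer = setInterval(function () {
    log(process.memoryUsage().rss);
  }, intervalMs);
  return function stop() { clearInterval(timer); };
}

// Illustrative usage against node-odbc (assumes an openable connection string):
//
// var db = require('odbc')();
// db.open(connectionString, function (err) {
//   var stop = startMemorySampler(1000, function (rss) {
//     console.log('rss bytes:', rss);
//   });
//   db.queryResult('select * from orders', function (err, result) {
//     stop();
//   });
// });
```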

My specs:

Fedora 15
Node v0.10.13
node-odbc 0.5.25
postgres 9.1rc1


Hey @sarus,

I haven't seen this happen before (not that I have tested it extensively). I did just test this on Ubuntu with the MS SQL Native Client, and selecting * from a table with 4.8 million rows showed no noticeable increase in memory usage when using queryResult. Memory usage did increase as I fetched rows.

I wonder if the Postgres ODBC driver is pre-loading the rows. Or using a client side cursor by default. I don't even know if those things are possible though. Something is definitely fishy here.

At this time, the only way I can think to really test if it is just the ODBC driver doing this is to write a test using ODBC in C that just connects to the database and then executes the same query. If you want to try this, you could use test/odbc-bench.c as a starting point.

To do that you would need to change the connection string at line 48, comment out lines 63, 68-70 and 73, and change the SQL query on line 66. Then compile with gcc odbc-bench.c -lodbc -o odbc-bench. Watch the memory usage while executing ./odbc-bench.

If you try this, then maybe we can pinpoint if this is a Postgres ODBC issue or something that is going on within this module.



@wankdanker wankdanker reopened this Aug 19, 2013
sarus commented Aug 20, 2013

Thank you for the great debugging suggestions. I went ahead and modified the test/odbc-bench.c file as you outlined in your post. The test program immediately started using up all available memory on my system and started using swap. Based on this test I did some research on the postgres odbc driver and found out that the default behavior is to fetch all rows into memory.

You can override this behavior by adding UseDeclareFetch = 1 to odbc.ini. Sure enough, after adding this flag the test program ran and returned immediately with no spike in memory usage. Thinking the problem was solved, I ran a query from Node.js and my node program immediately crashed. The problem was now with the database.query(<query>, <callback>) call, which was no longer working. Setting UseDeclareFetch = 1 was causing this new issue (I turned it off and on a couple of times to confirm).

Essentially, the following code crashes hard when using UseDeclareFetch = 1 in odbc.ini

var client = new odbc.Database();
client.open(connectionString, function (error) {
    // This now fails (hard crash of the node process)
    client.query("SELECT * FROM testTable", callback);
    // The callback is never called and no error is provided
});

I went ahead and rebuilt node-odbc with the debug flag on. Here is the debug output:

ODBCConnection::Query : sqlLen=134, sqlSize=270, sql=S
ODBCResult::New m_hDBC=92F2E88 m_hDBC=92E6AC8 m_hSTMT=B5420418 canFreeHandle=1
ERROR: [WID-1] Died. //this is my own error catching routine

I then started adding DEBUG_PRINTF statements everywhere to try and narrow the problem down. The crash was happening in odbc_result.cpp on line 486:

self->columns = ODBC::GetColumns(self->m_hSTMT, &self->colCount);

Digging a little more, the problem is in odbc.cpp, in the ODBC::GetColumns method. The program actually crashes right at line 301, in the call to SQLColAttribute. Note that the loop does execute properly a couple of times before crashing.

ret = SQLColAttribute(hStmt, columns[i].index,

It turns out the call to SQLNumResultCols(hStmt, colCount) on line 283 appears to be returning a bogus value for the number of columns. colCount is properly initialized to 0 but then after the call to SQLNumResultCols the colCount value becomes 256 (it should be 7). I just added the following print statement to find this:

DEBUG_PRINTF("ODBC::GetColumns::NumColumns is %i\n", *colCount);

I'm definitely not a C++ person, so I assume this is the correct way to format the print statement. Confused, I went back to the odbc-bench.c program and tried adding a call to SQLNumResultCols there to see if it returned the right number after the call to SQLExecDirect on line 66 (it worked as expected).

short colCount;
ret = SQLNumResultCols(m_hStmt, &colCount);
printf("Column Count: %i\n", colCount); //returns 7 which is the correct value

At this point I ran out of ideas on where to look. Sorry for the long post, but I wanted to be thorough. The bottom line is that the entire node-odbc module crashes hard when UseDeclareFetch = 1 is set for the postgres ODBC driver. My best guess at the moment is that it has something to do with the incorrect number of columns returned by SQLNumResultCols inside of ODBC::GetColumns. I also tried updating to the latest postgres ODBC driver, but the problem did not go away (same exact issue).

Any ideas??

Thank you!


Thanks for all the details. That's excellent. I found this thread which might be relevant:!searchin/pgsql.interfaces.odbc/UseDeclareFetch/pgsql.interfaces.odbc/fYbMue3Mcug/md_k-b-uDXEJ

I'm wondering whether it is required to specify a value for FETCH in the connection string when using UseDeclareFetch=1. If the default value for FETCH is 0, maybe the driver does not know any information about the columns because it hasn't fetched any yet. I'll have to get my postgres server back up and see if I can reproduce the issue, but it might take me some time.

sarus commented Aug 21, 2013


So I think I figured out the issue and it was not what I was expecting. To start I did have the Fetch parameter set in odbc.ini so my file had the following:

# Turn on Fetch support to prevent postgres from loading results into memory
UseDeclareFetch = 1
# Number of rows to fetch each time
Fetch = 1000 

The crash, however, still occurred. I beefed up the odbc-bench.c test program to include fetching the data, and it worked just fine (but my actual code kept crashing). I'm not entirely sure why, but at some point I remembered that the actual queries I was running from Node.js included SQL comments of the -- form, which I did not use in odbc-bench.c. As it turns out, this was the problem.

Query that causes crashes with UseDeclareFetch = 1:

SELECT * FROM testTable -- User: Admin

For some reason, when UseDeclareFetch is turned on, the comment crashes node-odbc (almost certainly because of a bug somewhere in the postgres ODBC driver). It works fine when UseDeclareFetch is not turned on (but then you can't return large result sets). The other comment style does work, however, so if you change the above query to use the /* */ syntax as shown below, everything works.

SELECT * FROM testTable /* User: Admin */
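One defensive option on the application side is to rewrite trailing -- line comments into /* */ block comments before submitting the query. A naive sketch (rewriteLineComments is my own helper, not part of node-odbc, and it does not handle "--" appearing inside string literals):

```javascript
// Hypothetical workaround helper: convert trailing "--" line comments into
// /* */ block comments so anything the driver appends after the statement
// (e.g. its fetch command) is not commented out.
// Naive: does not handle "--" inside quoted string literals.
function rewriteLineComments(sql) {
  return sql.split('\n').map(function (line) {
    var idx = line.indexOf('--');
    if (idx === -1) return line;
    return line.slice(0, idx) + '/* ' + line.slice(idx + 2).trim() + ' */';
  }).join('\n');
}
```

Applied to the failing query above, this produces exactly the working /* */ form.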

Bottom Line: I think the bug is in the postgres odbc driver (not node-odbc) and I figured out a suitable workaround. Hopefully this helps someone else in the future!

Thank you again for your hard work on such a useful module. It's greatly appreciated!


Yeah, that's pretty unexpected! Glad you figured it out.

Based on what I have seen in some output logs of other folks debugging UseDeclareFetch issues, it looks like the Postgres driver wraps the original query in a cursor declaration and fetch command like the following:

declare "P01304" cursor for select * from your_original_statement -- this is a comment which blocks the following also added by the driver ;fetch 100 in "P01304"

Where select * from your_original_statement -- this is a comment which blocks the following also added by the driver is the entire statement submitted by the user. A trailing '--' comment in the user's statement therefore also comments out the fetch command that the driver appends after it.

Wonder if the Postgres ODBC developers are aware of this.

Thank you for your effort in debugging the issue.

@wankdanker wankdanker closed this Aug 21, 2013