Chunked response fix #84

merged 3 commits into 3rd-Eden:master on Mar 11, 2013

2 participants


This fixes an issue with chunked responses (large amounts of data). It removes the trailing \0 from every response chunk before it gets parsed. If the \0 is kept, it gets merged into the combined response and crashes the parser.

The parser will otherwise try to parse the wrong line as a command, because it thinks the previous command has already been fully received.
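The idea can be sketched like this (a minimal illustration, not the actual `memcached.js` code; `stripTrailingNull` is a hypothetical helper name):

```javascript
// Sketch of why the trailing \0 must be stripped before chunks are
// buffered: if it stays, the next chunk is appended after the NUL byte
// and the combined string no longer starts at a valid protocol line.
function stripTrailingNull(chunk) {
  // Remove a single trailing NUL byte, if present.
  return chunk.charCodeAt(chunk.length - 1) === 0
    ? chunk.slice(0, -1)
    : chunk;
}

// Simulated chunked response: the first chunk arrives with a stray \0.
const chunk1 = 'VALUE invoices 2 196\r\n{"id":17748}\r\n\0';
const chunk2 = 'END\r\n';

// Without stripping, chunk1 + chunk2 contains an embedded NUL and the
// parser chokes on the line that follows it (the "Unexpected token V"
// in the stack trace above).
const buffered = stripTrailingNull(chunk1) + chunk2;
console.log(JSON.stringify(buffered));
```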

The error was:

   VALUE invoices::#17748 2 196
   error - 21/01 - 22:17 Caught exception: SyntaxError: Unexpected token V
   info  - 21/01 - 22:17 SyntaxError: Unexpected token V
    at Object.parse (native)
    at Socket.value (/node/node_modules/memcached/lib/memcached.js:432:17)
    at EventEmitter.rawDataReceived (/node/node_modules/memcached/lib/memcached.js:624:51)
    at EventEmitter.BufferBuffer (/node/node_modules/memcached/lib/memcached.js:584:12)
    at Socket.bowlofcurry (/node/node_modules/memcached/lib/utils.js:109:15)
    at Socket.EventEmitter.emit (events.js:96:17)
    at TCP.onread (net.js:392:31)

I've had it running for a while now without any issues, loading up millions of items in seconds for testing.


Thanks a lot for the pull request. Could you add a small test against it? I don't want to introduce any regressions when I release my 1.0 refactor.


You are right, I'm going to check if I can write a test that doesn't need to insert millions of items. The case only shows up with network delay; locally we never get chunked responses, which is the main reason I removed my initial test case. Any suggestions on reproducing network delays?


For the new memcached parser I have developed a memcached server response fuzzer to handle edge cases like these, where the parser is the root cause of the issue.

Maybe you could set up a Node-based memcached server for your test and add some random setTimeouts in there. Other than that, I don't really know.

@3rd-Eden 3rd-Eden merged commit 70a31eb into 3rd-Eden:master Mar 11, 2013

1 check passed: the Travis build passed

@renedx I just merged this; I just hope that this will also be fixed in my other parser.


Thanks! I tried to reproduce it with tests. If I find something that works reliably (as a test should), I will add it anyway so you can run your new parser against it too :)
