Large Files #21

wants to merge 2 commits



We tried reading large (1-100 GB) pcap files with node-pcap and ran into some problems.

It seemed as though the file reads were eating up the entire event loop, and nothing else was being allowed through (database calls, HTTP calls, anything else we tried to do). This basically worked fine on the smaller files we tried (10-50 MB): node would read the entire file contents, all the other database and HTTP calls would get processed afterward, and you wouldn't notice anything wrong.

With the larger files, we will obviously run out of memory when we have 100 GB of packets queuing up tasks that can't get executed until all the packets are read.

After some debugging, we found that the IOWatcher callback was actually only getting called once, ever (at the beginning), and that essentially all of the packet_ready() callbacks were being made from just one binding.dispatch() call. This does not seem to be how things are supposed to work.
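For illustration, here is a minimal standalone sketch of that draining behavior (my own example, not node-pcap's code; the file name "dump.pcap" is made up, and node-pcap's actual Dispatch binding wraps this pattern in V8 glue, as shown in the diff below). On an offline handle, pcap_dispatch() keeps returning 1 until EOF, so a do..while around it consumes the whole file inside a single callback:

    #include <pcap.h>
    #include <stdio.h>

    // Counts packets; node-pcap's PacketReady calls into JS here instead.
    static void packet_ready(u_char *user, const struct pcap_pkthdr *hdr,
                             const u_char *bytes) {
        (void)hdr; (void)bytes;
        (*(long *)user)++;
    }

    int main(void) {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *handle = pcap_open_offline("dump.pcap", errbuf);
        if (!handle) { fprintf(stderr, "%s\n", errbuf); return 1; }

        long total = 0;
        int n;
        do {  // the loop the patch removes: never yields until EOF
            n = pcap_dispatch(handle, 1, packet_ready, (u_char *)&total);
        } while (n > 0);

        printf("dispatched %ld packets before returning control\n", total);
        pcap_close(handle);
        return 0;
    }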

Essentially, we found that removing the do..while() loop resolved the issue for us and lets node get back into its event loop. I'm not sure if there are side effects to this that we haven't seen yet, as I'm not 100% certain of the need for the do..while() loop to begin with.

I tested this on 10 GB files with no issues and no missed packets (comparing against tcpdump). I also tested it with live traffic, and it seemed to perform fine there as well.

Can you let me know your thoughts on this change and any side effects it may incur? Thanks a bunch.


@throughnothing can you provide resolution details?


@jmaxxz sorry, I was just closing this to get it off of my dashboard as I didn't think it was going to be accepted or merged at this point. I'm not working with this module anymore, nor do I have that problem anymore, but the code is here for someone else to take over if they run into the issue.


If you'd like me to re-open this or if you want to try to get a discussion started on it, I'm happy to do that if you're hitting this same issue.


No, don't reopen. I was just curious.


I do think we should come up with some way to handle this large-file situation, but I think most users reading files aren't doing anything else on the event loop, so they don't notice. If someone else comes along with this problem in the future, I guess we can tackle it then.
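One middle-ground approach (purely a sketch of mine, not anything proposed in this PR; dispatch_batch and kBatchSize are hypothetical names) would be to cap how many packets one dispatch processes and then yield back to the event loop, re-arming the watcher if the batch came back full:

    #include <pcap.h>

    // Hypothetical helper, not part of node-pcap: drain at most kBatchSize
    // packets per call so the event loop regains control between batches.
    static const int kBatchSize = 64;  // assumed tuning knob

    int dispatch_batch(pcap_t *handle, pcap_handler cb, u_char *user) {
        int total = 0;
        while (total < kBatchSize) {
            int n = pcap_dispatch(handle, 1, cb, user);
            if (n <= 0) break;  // 0 = no packets/EOF, negative = error
            total += n;
        }
        return total;  // caller reschedules if total == kBatchSize
    }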


Yeah, for all I know, it's not even an issue anymore. The original code I modified is long gone.

Commits on Mar 2, 2011
  1. made this a little cleaner (William Wolf authored)
Showing with 2 additions and 5 deletions.
@@ -99,11 +99,8 @@ Dispatch(const Arguments& args)
     Local<Function> callback = Local<Function>::Cast(args[1]);
-    int packet_count, total_packets = 0;
-    do {
-        packet_count = pcap_dispatch(pcap_handle, 1, PacketReady, (u_char *)&callback);
-        total_packets += packet_count;
-    } while (packet_count > 0);
+    int total_packets = 0;
+    total_packets = pcap_dispatch(pcap_handle, 1, PacketReady, (u_char *)&callback);
     return scope.Close(Integer::NewFromUnsigned(total_packets));