
fix eventemitter madness, be careful around pausing here too #206

Closed
wants to merge 2 commits

3 participants

@quartzjer

Much happier :)

@kristjan

Seems legit. This does what, exactly? Pushes the network queue over to the remote host so we don't blow out our RAM?

Without this it oversubscribes the drain handler. I don't know for sure if there's any functional difference in the blocking and tackling, but it's definitely friendlier to the EventEmitter when there are lots of data writes happening :)
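
(For context, the oversubscription being described is roughly the following: each backed-up write attaches another 'drain' listener, which is what trips Node's EventEmitter leak warning. This is a sketch with illustrative stream names, not the actual http-proxy code.)

```js
// Leaky: every time write() returns false, another 'drain' listener is added,
// so Node eventually warns "possible EventEmitter memory leak detected".
incoming.on('data', function (chunk) {
  var flushed = outgoing.write(chunk);
  if (!flushed) {
    incoming.pause();
    outgoing.on('drain', function () { // one more listener per stalled write
      incoming.resume();
    });
  }
});

// Friendlier: once() removes the listener after it fires, so listeners
// don't pile up across many stalled writes.
incoming.on('data', function (chunk) {
  if (!outgoing.write(chunk)) {
    incoming.pause();
    outgoing.once('drain', function () {
      incoming.resume();
    });
  }
});
```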

@indexzero
nodejitsu member

@quartzjer This looks good. So good that I would ask that you update other places you see `flushed = *.write(...)` with this pattern.

@quartzjer

OK, I honestly tried to fix this elsewhere, but I quickly got into the weeds trying to understand the larger impact it would have, and I don't know the internals of http-proxy well enough to have any confidence in twiddling these bits. It seems that everywhere we track flushed, track paused, and attach drain listeners, there needs to be some way to check whether the drain listener is already attached for the specific paired socket; relying on a shared paused variable could actually produce edge cases that end in a full standstill.
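
(One way to express that per-socket check, sketched against the stock EventEmitter API; the function and variable names here are made up for illustration and are not http-proxy internals.)

```js
// Attach the drain handler only if this exact handler isn't already waiting
// on this particular outgoing socket, instead of consulting a shared
// `paused` flag that could stall both directions of the pair.
function pipeWithBackpressure(incoming, outgoing) {
  function onDrain() {
    outgoing.removeListener('drain', onDrain);
    incoming.resume();
  }

  incoming.on('data', function (chunk) {
    if (outgoing.write(chunk)) return;

    incoming.pause();
    if (outgoing.listeners('drain').indexOf(onDrain) === -1) {
      outgoing.on('drain', onDrain);
    }
  });
}
```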

I'm not sure how to do this properly in all the other places with my relatively shallow understanding of http-proxy, and I may not have the time to do a deep dive and learn it all any time soon :(

cc @temas

@quartzjer

I decided to go the safest route possible and just keep a variable per instance. I tested this everywhere we were seeing the EventEmitter leak on the Locker project, and it runs clean now. Should be good to go, I think?
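
(Without quoting the diff itself, the per-instance guard presumably looks something like the sketch below: each proxied pair keeps its own flag so at most one 'drain' listener is pending at a time. The `drainPending` name and the surrounding structure are illustrative only, not the actual patch.)

```js
// Sketch of a per-instance guard: each proxied pair tracks its own pending
// drain state, so listeners never accumulate on the outgoing socket.
function ProxyPair(incoming, outgoing) {
  this.drainPending = false;

  var self = this;
  incoming.on('data', function (chunk) {
    if (outgoing.write(chunk)) return;

    incoming.pause();
    if (!self.drainPending) {
      self.drainPending = true;
      outgoing.once('drain', function () {
        self.drainPending = false;
        incoming.resume();
      });
    }
  });
}
```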

@quartzjer

Does the current patch look good?

@indexzero
nodejitsu member

Things have diverged so much from this that I can't merge it in. If you encounter this in v0.9.0 when it is released, let us know. Sorry.

@indexzero closed this Mar 9, 2013