This repository has been archived by the owner. It is now read-only.

Fix avoid apps from retaining references to the upgrade head and large s... #4660

Closed · wants to merge 1 commit

Conversation

4 participants
@jmatthewsr-ms

Submitted for consideration, and at the very least to raise awareness for apps handling buffers from node core.

Please see https://github.com/jmatthewsr-ms/node-slab-memory-issues for an explanation. Three modules are identified that suffer from high memory usage by retaining a reference to the upgrade head data.

tjfontaine Jan 25, 2013

Did you run any of the benchmarks after this change?

It seems like something better handled in the problem modules, and I noticed you've already filed such issues.

jmatthewsr-ms Jan 25, 2013

Did not run any benchmarks.

Discussed this with Bert. Even after these changes there is still an 8k retention per websocket connection, due to Buffer's own internal 'pool', which is a slab buffer as well. I closed the other pull requests because I believe this will continue to occur, and if third parties wanted to solve it themselves they would have to convert the Buffer to a string, or similar, to completely remove the retention. That seems non-ideal for handling upgrade events from http.js. Also, if this is solved in node core, the changes I proposed will re-create the 8k/connection issue. Currently it doesn't appear possible to create a Buffer smaller than poolSize (8k) without manually creating and assigning a SlowBuffer after creation, which is brittle. So I think this warrants some additional discussion. Thoughts?

trevnorris Jan 25, 2013

Just a couple thoughts. This sounds like a manifestation of the issue I demonstrate in GH-4525. It's a known issue and after I get done with my two open patches I plan on looking into it.

Also you should be able to change the pool size by assigning a value to Buffer.poolSize. I've created several benchmarks that demonstrate how this affects performance and memory usage (which are part of those included in one of my current patches).

I'm also wondering what people mean by 'slab buffers'. Slabs are just a wrapper around the Buffer class; any memory issues that exist are because of how Buffers handle their external memory pools.

bnoordhuis Jan 25, 2013

This patch won't help, it's effectively a no-op (but a no-op that consumes more CPU cycles). When you slice, you retain a reference to the parent buffer.

As @trevnorris points out, this is a variation on #4525. The best thing to do is raise awareness of how buffers work and that you should not keep them around unless absolutely necessary.

@bnoordhuis bnoordhuis closed this Jan 25, 2013

jmatthewsr-ms Jan 25, 2013

Updated https://github.com/jmatthewsr-ms/node-slab-memory-issues to add a comparison of memory usage with this pull request and without it. I wouldn't characterize this as a no-op. It's swapping a very large slab buffer (1 or 10MB) with a much smaller one (8k).

Some very popular and well-reviewed projects have unknowingly hit this issue, and it seems likely others will as well. We experienced it in a production system, where it effectively created a DoS scenario.

Your consideration is very much appreciated.

bnoordhuis Jan 25, 2013

I wouldn't characterize this as a no-op. It's swapping a very large slab buffer (1 or 10MB) with a much smaller one (8k).

Okay, I'll grant you that, but I still don't think it's the right solution. For one, this could be accomplished with zero copying, simply by shrinking the slab buffer. But there's a reason it's that big, of course: it's faster. For that matter, the slabs are 1024 kB each now, but 4096 kB or 8192 kB would probably be faster still.

As mentioned in the other issue, given the choice between optimizing for throughput or memory, node.js will pick throughput every time.

jmatthewsr-ms Jan 25, 2013

Throughput goes to 0 when your server runs out of memory :). I fully understand the usage of large memory buffers; that is not in dispute. This is an issue specific to http upgrade handling, and at least three well-known libs have come across it. Is there truly a performance penalty to this patch? Thanks.
