Right now, 16 KiB blocks are read from disk fairly early and then kept in a queue of requested pieces. So a client requesting 32 blocks has a memory consumption of at least 32 × 16 KiB = 512 KiB. That is far too much if we expect 40-100 connections: it amounts to roughly 20-50 megabytes of waste.
It is possible to optimize this. The first step is simply to defer the read to the last possible moment, so the fetched data can be discarded as soon as it has gone down the wire. This will also pave the way for the fast extension's SUGGEST message.
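A minimal sketch of the deferred-read idea, in Python: the request queue holds only lightweight (piece, offset, length) descriptors, and the 16 KiB block is read just before it is written to the socket. The names `PeerConnection`, `read_block`, and `next_chunk` are hypothetical stand-ins for the client's actual storage and transport layers, not the real API.

```python
import collections

BLOCK_SIZE = 16 * 1024  # 16 KiB request blocks, as in the issue


class PeerConnection:
    """Sketch: queue request descriptors, not block data.

    `read_block` is a hypothetical callback (piece, offset, length) -> bytes
    standing in for the client's disk/storage layer.
    """

    def __init__(self, read_block):
        self._read_block = read_block
        self._requests = collections.deque()  # pending (piece, offset, length)

    def on_request(self, piece, offset, length):
        # Eager approach: append read_block(...) here -> 16 KiB held per entry.
        # Lazy approach: keep only the three-integer descriptor (tens of bytes),
        # so 32 queued requests cost almost nothing instead of 512 KiB.
        self._requests.append((piece, offset, length))

    def next_chunk(self):
        """Read the block at the last possible moment, just before sending.

        The bytes become garbage-collectable as soon as they have gone
        down the wire, instead of sitting in the queue.
        """
        if not self._requests:
            return None
        piece, offset, length = self._requests.popleft()
        return self._read_block(piece, offset, length)
```

With this shape, per-connection memory scales with the number of queued descriptors rather than with 16 KiB × queue depth, and the same descriptor queue is a natural place to later hook in fast-extension SUGGEST handling.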