
Simplify BufferPool #67

Closed
KrzysztofCwalina opened this issue Mar 27, 2015 · 5 comments

@KrzysztofCwalina
Member

The pool is quite complex, and I don't think the complexity is worth it. Interlocked operations are very expensive, and I think performance-sensitive scenarios should use a pool per thread. Also, the interlocked synchronization is not much faster than locks, yet it causes some buffers to be dropped on the heap.
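
For illustration, here is a minimal sketch of the pool-per-thread idea; `ThreadLocalBufferPool` and its members are hypothetical names, not the actual BufferPool API.

```csharp
using System;

// Hypothetical sketch: one cached buffer per thread, so Rent/Return
// need no synchronization at all. Not the actual BufferPool design.
public static class ThreadLocalBufferPool
{
    [ThreadStatic]
    private static byte[] _buffer; // each thread sees its own field

    public static byte[] Rent(int minimumLength)
    {
        byte[] buffer = _buffer;
        if (buffer != null && buffer.Length >= minimumLength)
        {
            _buffer = null;             // hand out the cached buffer
            return buffer;
        }
        return new byte[minimumLength]; // cache miss: allocate
    }

    public static void Return(byte[] buffer)
    {
        _buffer = buffer;               // keep the most recently returned buffer
    }
}
```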

@KrzysztofCwalina self-assigned this on Mar 27, 2015
@davidfowl changed the title from "Simplity BufferPool" to "Simplify BufferPool" on May 17, 2015
@VSadov
Member

VSadov commented Oct 9, 2015

Our analysis of ObjectPool in Roslyn has shown that the most common scenario is a burst of ferocious borrow/return activity from a single, but randomly chosen, thread.

That is explained by the fact that pooling pays off most when objects are used for very short durations; objects taken from the pool for a long time, or permanently, benefit much less. As a result, individual threads transition in and out of heavy pool use, but rarely at the same time.

I am not sure whether that translates to the scenarios for BufferPool, but it seems likely.

Because of the above, in Roslyn we can assume (and we see it in profiles) that:

  1. A pool per thread would be expensive and unnecessary, since hundreds of threads may use the pool, but rarely at the exact same time.
  2. Interlocked.Exchange in the pool is actually fairly cheap, since there is typically no cross-thread sharing.
  3. Occasionally dropping an object out of the pool is fine, since it happens extremely infrequently (sharing is rare, and contention even rarer).

I did some limited experiments with a striped pool, i.e., four sub-pools, picking one based on the current thread ID to reduce the chance of sharing. The additional complication did not pay for itself, since we did not have enough contention in the first place.
In a more general solution, where contention might be an occasional hazard, I would consider dynamic striping: start with one pool array, and once you detect too many collisions, add another stripe dynamically, and so on, up to some reasonable limit of stripes.
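
A rough sketch of the static striping idea, under the assumptions above; `StripedPool` is a hypothetical type, not code from Roslyn or corefxlab.

```csharp
using System;
using System.Threading;

// Hypothetical sketch: pick a sub-pool slot from the current thread ID
// to reduce the chance of two threads hitting the same slot.
public sealed class StripedPool<T> where T : class, new()
{
    private readonly T[] _slots;
    private readonly int _stripeMask; // assumes the stripe count is a power of two

    public StripedPool(int stripes = 4)
    {
        _slots = new T[stripes];
        _stripeMask = stripes - 1;
    }

    public T Get()
    {
        int i = Environment.CurrentManagedThreadId & _stripeMask;
        // Interlocked.Exchange is cheap here when there is no cross-thread sharing.
        return Interlocked.Exchange(ref _slots[i], null) ?? new T();
    }

    public void Free(T item)
    {
        int i = Environment.CurrentManagedThreadId & _stripeMask;
        _slots[i] = item; // may overwrite and drop an item; acceptable per point 3 above
    }
}
```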

Another observation is that in many scenarios (such as a scratch buffer or a builder) a task briefly needs just one object at a time, so many requests can be satisfied with a single cached element.
Based on that, we have a micro-optimization of keeping one singleton object separate from the rest of the pool.

Perhaps this info would be useful.

@VSadov
Member

VSadov commented Oct 9, 2015

Our object pool is basically a Get/Free API wrapped around a single object plus a fixed-size array.
It started out much more complex, but after a number of refinements it ended up as simple as that.
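
Condensed, that shape might look like the following. This is a simplified sketch modeled loosely on Roslyn's ObjectPool<T>, not the actual source; names and details are mine.

```csharp
using System.Threading;

// Sketch of the shape described above: a singleton fast path plus a small
// fixed-size array, with interlocked operations only where a race matters.
public sealed class SimpleObjectPool<T> where T : class, new()
{
    private T _firstItem;        // singleton fast path for the common one-object case
    private readonly T[] _items; // overflow storage, fixed size

    public SimpleObjectPool(int size = 16) => _items = new T[size];

    public T Get()
    {
        // Try the singleton first; CompareExchange guards the rare cross-thread race.
        T item = _firstItem;
        if (item != null && Interlocked.CompareExchange(ref _firstItem, null, item) == item)
            return item;

        T[] items = _items;
        for (int i = 0; i < items.Length; i++)
        {
            item = items[i];
            if (item != null && Interlocked.CompareExchange(ref items[i], null, item) == item)
                return item;
        }
        return new T(); // pool empty: allocate
    }

    public void Free(T item)
    {
        // Unsynchronized writes: a lost race just drops an object, which is rare and fine.
        if (_firstItem == null) { _firstItem = item; return; }

        T[] items = _items;
        for (int i = 0; i < items.Length; i++)
        {
            if (items[i] == null) { items[i] = item; return; }
        }
        // Pool full: drop the item (the "occasionally dropping an object" case).
    }
}
```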

@codespare

Regarding ManagedBufferPool specifically, the HTTP Archive "Interesting stats" page is cited in a comment as the justification for the 2048000-byte default maximum buffer size in System/Buffers/ManagedBufferPool.cs.
That is an informative page, thank you for having pointed it out. I noticed it is updated fortnightly, and the Average Bytes per Page is over 2MB as of today (all those Christmas banners, maybe? :).
Also, shouldn't 2MB be 2097152 bytes rather than 2048000? (2 × 2^20 instead of 2 × 2^10 × 1000.)
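
For reference, the two readings side by side (standalone constants for comparison, not the actual field names in ManagedBufferPool.cs):

```csharp
static class BufferSizeComparison
{
    public const int BinaryTwoMegabytes = 2 * 1024 * 1024; // 2,097,152 bytes = 2 × 2^20
    public const int CurrentDefault     = 2 * 1024 * 1000; // 2,048,000 bytes = 2 × 2^10 × 1000
}
```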

@KrzysztofCwalina
Member Author

@sokket, I think this issue is resolved. If so, please close it.

But please do continue the discussion about the maximum size of pooled buffers; it's important that we get it right. Having said that, I looked at the data, and it seems the total size of all assets is approaching 2MB per page. I am not sure that is relevant to this discussion, though: separate assets will be sent/received using separate buffers, correct?

@jonmill
Contributor

jonmill commented Jan 12, 2016

I agree, I think this issue can be closed.

@codespare if you think the default buffer size is incorrect, please open an issue and tag me so we can have the discussion there and have an action item to tag any PR against, making it easier to track why the change was made :)

@jonmill closed this as completed on Jan 12, 2016