sync: add Pool type #4720
I would like this very much. In one project at CloudFlare I have implemented some code to recycle []byte buffers which are used to hold web pages. These were being allocated and then released to the GC for cleanup. We now have a small package that accepts them back on a channel and maintains, in a goroutine, a list of available buffers. Anyone needing a buffer just receives on a channel and gets the next item from the list, or a new buffer if none are available. The code also keeps track of the size of the list, discarding buffers if it looks like there are 'too many'. The idea is that we don't keep buffers that were allocated during a burst of activity.
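A minimal sketch of the channel-based recycler described above, assuming a 32 KiB buffer size and a cap of 64 idle buffers; the names and numbers are illustrative, not the actual CloudFlare code:

```go
package bufpool

const (
	bufSize = 32 * 1024 // assumed page-buffer size
	maxFree = 64        // assumed cap on idle buffers; beyond this they go to the GC
)

// Recycler hands out bufSize-byte buffers and accepts them back over channels;
// a single goroutine owns the free list, so no locking is needed.
type Recycler struct {
	get chan []byte
	put chan []byte
}

func New() *Recycler {
	r := &Recycler{get: make(chan []byte), put: make(chan []byte)}
	go r.loop()
	return r
}

func (r *Recycler) loop() {
	var free [][]byte
	for {
		if len(free) == 0 {
			// Nothing cached: allocate one so a Get can always be served.
			free = append(free, make([]byte, bufSize))
		}
		select {
		case r.get <- free[len(free)-1]:
			free = free[:len(free)-1]
		case b := <-r.put:
			if len(free) < maxFree {
				free = append(free, b)
			} // else drop it, shedding buffers allocated during a burst
		}
	}
}

func (r *Recycler) Get() []byte  { return <-r.get }
func (r *Recycler) Put(b []byte) { r.put <- b }
```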
What you are proposing is essentially to provide a free() function. That's the easiest and the most efficient way to implement your interface :) Another option is to provide a generic pool for interface{}:

    type Pool struct {...}
    func (p *Pool) Get() interface{}
    func (p *Pool) Put(res interface{}) bool
    func (p *Pool) Drain() []interface{}
    func (p *Pool) SetCapacity(global, local int)

And then, if the buffer sizes have some reasonable limits, one can have, say, 16 pools for blocks of size 4k, 8k, 12k, ..., 64k, and manually choose the right one. Brad, would it suffice for the dl.google.com server? The idea behind SetCapacity(global, local int) is that you can set local=0, and then it becomes a slower pool for expensive resources. Yes, the fast version with the local cache would benefit greatly from per-m caches.
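A naive, mutex-based sketch of the proposed interface{} pool, just to make the signatures above concrete; it deliberately ignores the per-m local caches that the fast version would rely on, and the field names are assumptions:

```go
package pool

import "sync"

// Pool is a simple global pool of interchangeable resources.
type Pool struct {
	mu  sync.Mutex
	cap int           // global capacity; 0 means unbounded
	res []interface{} // idle resources
}

// Get returns an idle resource, or nil if the pool is empty.
func (p *Pool) Get() interface{} {
	p.mu.Lock()
	defer p.mu.Unlock()
	if n := len(p.res); n > 0 {
		x := p.res[n-1]
		p.res = p.res[:n-1]
		return x
	}
	return nil
}

// Put offers a resource back; it reports false if the pool is at capacity
// and the caller should release the resource itself.
func (p *Pool) Put(res interface{}) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.cap > 0 && len(p.res) >= p.cap {
		return false
	}
	p.res = append(p.res, res)
	return true
}

// Drain removes and returns all idle resources.
func (p *Pool) Drain() []interface{} {
	p.mu.Lock()
	defer p.mu.Unlock()
	all := p.res
	p.res = nil
	return all
}

// SetCapacity records the global cap; local is ignored in this sketch
// (the real proposal would use it to size per-m caches).
func (p *Pool) SetCapacity(global, local int) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.cap = global
}
```

With such a pool, the 4k..64k scheme above would just be an array of 16 Pools indexed by the requested size rounded up to the next 4k class.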
> And then, if the buffer sizes have some reasonable limits, one can have, say, 16 pools for blocks of size 4k, 8k, 12k, ..., 64k, and manually choose the right one. Brad, would it suffice for the dl.google.com server?

I think asking for and freeing n bytes is common enough that it should be first-class. We shouldn't make people create their own set of every size class of []byte.
The problem with runtime.Free is that using it incorrectly from buggy package A affects not-buggy package B. With my NewBytePool proposal as implemented for dl.google.com, the byte pools are owned by whoever made them. But everything needing []byte that I care about right now (io.Copy buffers, ioutil.Discard buffers, and dl.google.com buffers) is fixed size. So I'm fine ignoring variably-sized []byte allocation pools and not muddling this discussion. Pools of interface{} work for me.
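A minimal sketch of a caller-owned, fixed-size []byte pool in the spirit of the NewBytePool idea mentioned above; the constructor name and the buffered-channel implementation are assumptions, not the actual dl.google.com code:

```go
package bytepool

// BytePool hands out fixed-size buffers and keeps at most cap(free) of them
// idle; the pool is owned by whoever constructed it, so misuse stays local.
type BytePool struct {
	free    chan []byte
	bufSize int
}

// NewBytePool returns a pool holding at most maxFree idle buffers of bufSize bytes.
func NewBytePool(maxFree, bufSize int) *BytePool {
	return &BytePool{free: make(chan []byte, maxFree), bufSize: bufSize}
}

// Get returns an idle buffer if one is available, otherwise allocates a new one.
func (p *BytePool) Get() []byte {
	select {
	case b := <-p.free:
		return b
	default:
		return make([]byte, p.bufSize)
	}
}

// Put returns a buffer to the pool, dropping it if the pool is full or the
// buffer is the wrong size.
func (p *BytePool) Put(b []byte) {
	if cap(b) < p.bufSize {
		return // wrong-sized buffer: let the GC have it
	}
	select {
	case p.free <- b[:p.bufSize]:
	default: // pool full: drop the buffer
	}
}
```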
This bug is about sync.Cache, a cache of interchangeable fixed-size items. The idea is that a sync.Cache might replace fmt's internal cache, and it might be used for a cache of fixed-size buffers for io.Copy. A cache of non-fixed-size items such as variable-length byte slices is quite a bit more complicated.
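For context, the type that eventually shipped as sync.Pool covers the fixed-size io.Copy case roughly like this; the 32 KiB size and the package and function names are illustrative, not the real io or ioutil internals:

```go
package copybuf

import (
	"io"
	"sync"
)

// copyBuffers holds reusable 32 KiB scratch buffers.
var copyBuffers = sync.Pool{
	New: func() interface{} { return make([]byte, 32*1024) },
}

// Copy is like io.Copy but draws its scratch buffer from the pool, so
// concurrent copies reuse a small set of fixed-size buffers instead of
// allocating a fresh one each time.
func Copy(dst io.Writer, src io.Reader) (int64, error) {
	buf := copyBuffers.Get().([]byte)
	defer copyBuffers.Put(buf)
	return io.CopyBuffer(dst, src, buf)
}
```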
I have a preliminary version of sync.Cache based on proc-local data: https://golang.org/cl/7686043/ And here it is applied to fmt: https://golang.org/cl/7637047

    benchmark                 old ns/op    new ns/op    delta
    BenchmarkSprintfInt       363          368          +1.38%
    BenchmarkSprintfInt-2     457          249          -45.51%
    BenchmarkSprintfInt-4     850          142          -83.29%
    BenchmarkSprintfInt-8     1220         78           -93.61%
    BenchmarkSprintfInt-16    1226         58           -95.22%
    BenchmarkManyArgs         2233         2296         +2.82%
    BenchmarkManyArgs-2       1255         1316         +4.86%
    BenchmarkManyArgs-4       1250         714          -42.88%
    BenchmarkManyArgs-8       1100         429          -61.00%
    BenchmarkManyArgs-16      1528         361          -76.37%
Pretty please? :) The runtime part looks relatively tiny: https://golang.org/cl/7686043/diff/6001/src/pkg/runtime/proc.c
I'm concerned that sync.Cache is exported and will let a genie out of the bottle that we will have to support for all of Go 1.x. What if, for Go 1.1, a copy of the sync.Cache was added only to the fmt and net/http packages (or net/textproto), in a separate file, without exporting the type? Dave
Comment 18 by tylor@torbit.com: Please call this construct a Pool (or FixedPool) rather than a Cache if it comes into existence. I can already imagine myself explaining to new people over and over again on #go-nuts that sync.Cache is actually a pool, etc etc...
As a user of Go writing backend services, this is the one feature I was truly looking forward to for 1.2. All the ad-hoc leaky buckets I'm writing are nothing but boilerplate, and I'd love to be able to use something like sync.Pool. Is premature optimization a problem? Yes, but for my services, stop-the-world GC pauses are worse, and I can't profile ahead of time for every combination of real-world scenarios. The obvious places that need a pool will get them regardless. Please reconsider. A naive sync.Pool helps me get products to production faster. A GC-aware sync.Pool would be a godsend.
API proposal from Russ: https://groups.google.com/forum/#!searchin/golang-dev/gc-aware/golang-dev/kJ_R6vYVYHU/LjoGriFTYxMJ
CL proposal: https://golang.org/cl/41860043/
This issue was updated by revision 8c6ef06. R=golang-dev, rsc, cshapiro, iant, r, dvyukov, khr CC=golang-dev https://golang.org/cl/41860043
This issue was updated by revision 0f93118. R=golang-dev, iant CC=golang-dev https://golang.org/cl/43990043
This issue was updated by revision 46b4ed2. R=golang-dev, iant CC=golang-dev https://golang.org/cl/37720047
This issue was updated by revision 93e4a9d. R=golang-dev, iant CC=golang-dev https://golang.org/cl/44080043
This issue was updated by revision b682f6d. R=golang-dev, rsc CC=golang-dev https://golang.org/cl/44150043
This issue was updated by revision 90e9669. R=golang-dev, rsc CC=golang-dev https://golang.org/cl/44150043 TBR=golang-dev CC=golang-codereviews https://golang.org/cl/48190043
CL https://golang.org/cl/8202 mentions this issue. |
dr2chase added a commit that referenced this issue on May 1, 2015:
This includes the following information in the per-function summary:

    outK = paramJ     encoded in outK bits for paramJ
    outK = *paramJ    encoded in outK bits for paramJ
    heap = paramJ     EscHeap
    heap = *paramJ    EscContentEscapes

Note that (currently) if the address of a parameter is taken and returned, necessarily a heap allocation occurred to contain that reference, and the heap can never refer to stack, therefore the parameter and everything downstream from it escapes to the heap.

The per-function summary information now has a tuneable number of bits (2 is probably noticeably better than 1, 3 is likely overkill, but it is now easy to check and the -m debugging output includes information that allows you to figure out if more would be better).

A new test was added to check pointer flow through struct-typed and *struct-typed parameters and returns; some of these are sensitive to the number of summary bits, and ought to yield better results with a more competent escape analysis algorithm. Another new test checks (some) correctness with array parameters, results, and operations.

The old analysis inferred that a piece of the plan9 runtime was non-escaping by counteracting overconservative analysis with buggy analysis; with the bug fixed, the result was too conservative (and it's not easy to fix in this framework), so the source code was tweaked to get the desired result. A test was added against the discovered bug.

The escape analysis was further improved by splitting the "level" into 3 parts, one tracking the conventional "level" and the other two computing the highest-level-suffix-from-copy, which is used to generally model the cancelling effect of indirection applied to address-of.

With the improved escape analysis enabled, it was necessary to modify one of the runtime tests because it now attempts to allocate too much on the (small, fixed-size) G0 (system) stack, and this failed the test.

Compiling src/std after touching src/runtime/*.go with -m logging turned on shows 420 fewer heap allocation sites (10538 vs 10968).

Profiling allocations in src/html/template with

    for i in {1..5} ; do go tool 6g -memprofile=mastx.${i}.prof -memprofilerate=1 *.go; go tool pprof -alloc_objects -text mastx.${i}.prof ; done

showed a 15% reduction in allocations performed by the compiler.

Update #3753
Update #4720
Fixes #10466

Change-Id: I0fd97d5f5ac527b45f49e2218d158a6e89951432
Reviewed-on: https://go-review.googlesource.com/8202
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
This issue was closed.