Description
Many Go programs and packages try to reuse memory, either for locality reasons or to reduce GC pressure:

* pkg/regexp: a pool of available threads.

* pkg/net/http: wants to reuse a lot and has TODOs, but has resisted the temptation so far.

* pkg/fmt: print.go's var ppFree = newCache(func() interface{} { return new(pp) }).

* pkg/io: the 32 KB copy buffers; see the discussion at https://golang.org/cl/7206048/ (which spawned this bug). These buffers showed up in dl.google.com profiles, and lots of things in the Go standard library use io.Copy, so this can't be fixed by caller code.

* pkg/io/ioutil: see https://code.google.com/p/go/source/browse/src/pkg/io/ioutil/blackhole.go for reusing the Discard buffers.

* dl.google.com: was allocating hundreds of MB/s, causing lots of GCs, until we added a Google-internal (for now) []byte pool reuse library, with an API like:

    package pool
    func NewBytePool(...opts...) *BytePool
    func (*BytePool) Alloc(n int) []byte
    func (*BytePool) Free(buf []byte)

There are two distinct but related uses (illustrative sketches of both appear at the end of this description):

* reusing small-ish structs (like regexp, fmt)
* reusing bigger []byte (io, ioutil, dl.google.com)

The former would benefit more from a per-m cache; Dmitry and Russ had some thoughts towards this. With big []byte, per-m doesn't matter as much, but you do care about not letting things burst past some threshold (yet not retaining memory for too long unnecessarily) and about grouping buffers into size classes (i.e. a "user-space" tcmalloc) when the sizes differ.

The question: do we make a new package or facility to promote this pattern? The status quo is that it keeps being reimplemented in each package privately, and poorly. It'd be nice to have one good implementation that everybody could reuse.

Almost all of these uses are beyond what I believe is reasonable for static liveness/escape analysis. This isn't a GC problem either: by the time the GC is involved, it's too late, and we've already allocated too much. This is about allocating less and reusing memory when caller code knows it's no longer needed.
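To make the small-struct case concrete, here is a minimal sketch of the bounded free-list pattern that packages like regexp and fmt each reimplement privately today. The names (cache, pp, get, put) and the channel-based implementation are illustrative assumptions for this sketch, not the actual fmt or regexp code:

    package main

    import "fmt"

    // pp stands in for a small, frequently allocated struct (like fmt's pp).
    type pp struct {
        buf []byte
    }

    // cache is a bounded free list built on a buffered channel.
    type cache struct {
        saved chan *pp
    }

    func newCache(n int) *cache {
        return &cache{saved: make(chan *pp, n)}
    }

    // get returns a recycled *pp if one is available, or allocates a new one.
    func (c *cache) get() *pp {
        select {
        case p := <-c.saved:
            return p
        default:
            return new(pp)
        }
    }

    // put recycles p if there is room; otherwise it is left for the GC.
    func (c *cache) put(p *pp) {
        p.buf = p.buf[:0] // reset state before reuse
        select {
        case c.saved <- p:
        default:
        }
    }

    func main() {
        c := newCache(4)
        p := c.get()
        p.buf = append(p.buf, "hello"...)
        fmt.Printf("%s\n", p.buf)
        c.put(p) // p is now available for the next get
    }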
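And for the big-[]byte case, here is a sketch with the same Alloc/Free shape as the pool API above, grouping freed buffers into power-of-two size classes. The dl.google.com library isn't public, so this is only an assumed implementation; in particular the perClass option and the mutex-plus-map layout are invented for illustration:

    package pool

    import "sync"

    // BytePool keeps freed buffers grouped by power-of-two size class,
    // holding at most perClass buffers per class so allocation can burst
    // without memory being retained without bound.
    type BytePool struct {
        mu       sync.Mutex
        perClass int
        classes  map[int][][]byte // size class (power of two) -> free buffers
    }

    // NewBytePool returns a pool that keeps up to perClass free buffers
    // in each size class. (perClass is an invented option for this sketch.)
    func NewBytePool(perClass int) *BytePool {
        return &BytePool{perClass: perClass, classes: make(map[int][][]byte)}
    }

    // sizeClass rounds n up to the next power of two.
    func sizeClass(n int) int {
        c := 1
        for c < n {
            c <<= 1
        }
        return c
    }

    // Alloc returns a []byte of length n, reusing a freed buffer when possible.
    func (p *BytePool) Alloc(n int) []byte {
        c := sizeClass(n)
        p.mu.Lock()
        defer p.mu.Unlock()
        if free := p.classes[c]; len(free) > 0 {
            buf := free[len(free)-1]
            p.classes[c] = free[:len(free)-1]
            return buf[:n]
        }
        return make([]byte, n, c)
    }

    // Free returns buf to the largest size class it can fully satisfy,
    // dropping it (leaving it for the GC) if that class is already full.
    func (p *BytePool) Free(buf []byte) {
        if cap(buf) == 0 {
            return
        }
        c := 1
        for c*2 <= cap(buf) {
            c *= 2
        }
        p.mu.Lock()
        defer p.mu.Unlock()
        if len(p.classes[c]) < p.perClass {
            p.classes[c] = append(p.classes[c], buf)
        }
    }

An io.Copy-style caller would Alloc a buffer before its copy loop and Free it when the copy returns, so repeated copies stop showing up as fresh 32 KB allocations in profiles.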