
cmd/compile: storing pointer of a local value escapes #13493

dsnet opened this issue Dec 5, 2015 · 6 comments


@dsnet dsnet commented Dec 5, 2015

Using go1.5.1

Consider the following program:

package main

func foo() {
    type node struct{ n0, n1 *node }
    var nodeCache [1024]node
    var nodePtrs [1024]*node
    nodes := nodePtrs[:0]

    n := &nodeCache[0]
    n.n0 = &nodeCache[1]
    n.n1 = &nodeCache[2]

    nodes = append(nodes, n)
}

func main() {
    foo()
}

Currently, the compiler thinks that nodeCache escapes:

joetsai@zynx: ~ $ go build -gcflags=-m main11.go 
# command-line-arguments
./main11.go:16: can inline main
./main11.go:5: moved to heap: nodeCache
./main11.go:10: &nodeCache[1] escapes to heap
./main11.go:11: &nodeCache[2] escapes to heap
./main11.go:9: &nodeCache[0] escapes to heap
./main11.go:7: foo nodePtrs does not escape

The compiler should be able to track that pointers into nodeCache leak only to nodePtrs, and since nodePtrs never escapes, nodeCache should not escape either.

This would be really useful for operating on trees and graphs (that are bounded in size) without any heap allocations.

@randall77 randall77 commented Dec 5, 2015


@dr2chase dr2chase commented Dec 7, 2015

I think this requires tracking the size of nodes (as a slice) to know that the append won't require a heap (re)allocation, so this is not likely to happen especially soon. I can imagine optimistically performing SSA before escape analysis and doing escape analysis there (along with constant propagation and bounds-check elimination), but what you see here is the sum total of the thought that's been put into it.


@rsc rsc commented Dec 28, 2015

It's not just the append. There are cycles here that in general are very difficult to represent precisely in the analysis. I don't see this happening any time soon.


@gopherbot gopherbot commented Mar 21, 2016

CL mentions this issue.


gopherbot pushed a commit that referenced this issue Mar 22, 2016
Escape analysis has a hard time with tree-like
structures (see #13493 and #14858).
This is unlikely to change.
As a result, when invoking a function that accepts
a **Node parameter, we usually allocate a *Node
on the heap. This happens a whole lot.

This CL changes functions from taking a **Node
to acting more like append: It both modifies
the input and returns a replacement for it.

Because of the cascading nature of escape analysis,
in order to get the benefits, I had to modify
almost all such functions. The remaining functions
are in racewalk and the backend. I would be happy
to update them as well in a separate CL.

This CL was created by manually updating the
function signatures and the directly impacted
bits of code. The callsites were then automatically
updated using a bespoke script:

For ease of reviewing and future understanding,
this CL is also broken down into four CLs,
mailed separately, which show the manual
and the automated changes separately.
They are CLs 20990, 20991, 20992, and 20993.

Passes toolstash -cmp.

name       old time/op     new time/op     delta
Template       335ms ± 5%      324ms ± 5%   -3.35%        (p=0.000 n=23+24)
Unicode        176ms ± 9%      165ms ± 6%   -6.12%        (p=0.000 n=23+24)
GoTypes        1.10s ± 4%      1.07s ± 2%   -2.77%        (p=0.000 n=24+24)
Compiler       5.31s ± 3%      5.15s ± 3%   -2.95%        (p=0.000 n=24+24)
MakeBash       41.6s ± 1%      41.7s ± 2%     ~           (p=0.586 n=23+23)

name       old alloc/op    new alloc/op    delta
Template      63.3MB ± 0%     62.4MB ± 0%   -1.36%        (p=0.000 n=25+23)
Unicode       42.4MB ± 0%     41.6MB ± 0%   -1.99%        (p=0.000 n=24+25)
GoTypes        220MB ± 0%      217MB ± 0%   -1.11%        (p=0.000 n=25+25)
Compiler       994MB ± 0%      973MB ± 0%   -2.08%        (p=0.000 n=24+25)

name       old allocs/op   new allocs/op   delta
Template        681k ± 0%       574k ± 0%  -15.71%        (p=0.000 n=24+25)
Unicode         518k ± 0%       413k ± 0%  -20.34%        (p=0.000 n=25+24)
GoTypes        2.08M ± 0%      1.78M ± 0%  -14.62%        (p=0.000 n=25+25)
Compiler       9.26M ± 0%      7.64M ± 0%  -17.48%        (p=0.000 n=25+25)

name       old text-bytes  new text-bytes  delta
HelloSize       578k ± 0%       578k ± 0%     ~     (all samples are equal)
CmdGoSize      6.46M ± 0%      6.46M ± 0%     ~     (all samples are equal)

name       old data-bytes  new data-bytes  delta
HelloSize       128k ± 0%       128k ± 0%     ~     (all samples are equal)
CmdGoSize       281k ± 0%       281k ± 0%     ~     (all samples are equal)

name       old exe-bytes   new exe-bytes   delta
HelloSize       921k ± 0%       921k ± 0%     ~     (all samples are equal)
CmdGoSize      9.86M ± 0%      9.86M ± 0%     ~     (all samples are equal)

Change-Id: I277d95bd56d51c166ef7f560647aeaa092f3f475
Reviewed-by: Dave Cheney <>
Reviewed-by: Ian Lance Taylor <>

@cespare cespare commented Apr 5, 2017

I think the cycle issue here is the same as #7921, in which bytes.Buffer is heap-allocated if you call its WriteXxx methods.


@mdempsky mdempsky commented Apr 16, 2019

@cespare I think this issue is separate from (and trickier than) #7921.

A simpler test case demonstrating this issue is:

var s [1]*int
_ = append(s[:0], new(int))

Currently, new(int) escapes to the heap because we pessimistically assume append might need to allocate a new slice on the heap and copy its arguments there. To fix this issue, escape analysis would need range analysis to prove that the append can happen in place.

I expect the long-term way to address this is to defer running escape analysis until after SSA form.

