
proposal: cmd/compile: define register-based calling convention #18597

Closed
dr2chase opened this issue Jan 10, 2017 · 73 comments

@dr2chase (Contributor) commented Jan 10, 2017

Performance would generally improve somewhat if arguments to and results from function and method calls were passed in registers instead of on the stack. Projected improvements based on a limited prototype are in the range of 5-10%.

The running CL for the prototype: https://go-review.googlesource.com/c/28832/
The prototype, because it is a prototype, uses a pragma that should be unnecessary in the final design, though helpful during development.

(This is a placeholder bug for a design doc.)

problems/tradeoffs noted below, through 2017-01-19 (#18597 (comment))

This will reduce the usefulness of panic tracebacks because it will confuse/corrupt the per-frame argument information there. This was already a problem with SSA register allocation and spilling; this change will make it worse. Perhaps only results should be returned in registers.

Does this provide enough performance boost to justify "breaking" (how much?) existing assembly language?

Given that this breaks existing assembly language, why aren't we also doing a more thorough revision of the calling conventions to include callee-saves registers?

This should use the per-platform ABIs, to make interaction with native code go more smoothly.

Because this preserves the memory layout of the existing calling convention, it will consume more stack space than it otherwise might, had that layout been optimized away.

And the responses to the above, roughly:

In compiler work, 5% (as a geometric mean across benchmarks) is a big deal.

Panic tracebacks are a problem; we will work on that. One possible mitigation is to use DWARF information to make tracebacks much better in general, including properly named and interpreted primitive values. A simpler mitigation for targeted debugging could be an annotation indicating that a function should be compiled to store arguments back to the stack (old style) to ensure that that particular function's frame is adequately informative. This is also going to be an issue for improved inlining because, viewed at a low level, intermediate frames will disappear from backtraces.

The scope of required assembly-language changes is expected to be quite small; from Go-side function declarations the compiler ought to be able to create shims for the assembly to use around function entry, function exit, and surrounding assembly-language CALLs to named functions. The remaining case is assembly-language CALL via a pointer, and these are rare, especially outside the runtime. Thus, the need to bundle changes because they are disruptive is reduced, because the changes aren't that disruptive.

Incorporating callee-saves registers introduces a garbage-collector interaction that is not necessarily insurmountable, but other garbage-collected languages (e.g., Haskell) have been sufficiently intimidated by it that they elected not to use callee-saves registers. Because the assembler can also modify stack frames to include callee-save spill areas and introduce entry/exit shims to save/restore callee-save registers, this appears to have lower impact than initially estimated, and thus we have reduced need to bundle changes. In addition, if we stake out potential callee-save registers early developers can look ahead and adapt their code before it is required.

If each platform's ABI were used instead of this adaptation of the existing calling conventions, the assembly-language impact would be larger, the garbage-collector interactions would be larger, and either best-treatment for Go's multivalue returns would suffer or the cgo story would be sprinkled with asterisks and gotchas. As an alternative (a different way of obtaining a better story for cgo), would we consider a special annotation for cgo-related functions to indicate that exactly the platform calling conventions were used, with no compromises for Go performance?
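As context for the multivalue-return point: Go functions routinely return several values, so a register convention needs more than the one or two integer return registers most platform ABIs define. A minimal illustration (the function is hypothetical, and the register remark describes the proposal's motivation, not an implemented ABI):

```go
package main

import "fmt"

// divmod returns both the quotient and the remainder. Under a
// register-based convention both results would ideally travel back in
// registers; most platform ABIs define only one or two integer return
// registers, which is the friction with multivalue returns noted above.
func divmod(a, b int) (q, r int) {
	return a / b, a % b
}

func main() {
	q, r := divmod(17, 5)
	fmt.Println(q, r)
}
```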

@dr2chase dr2chase added this to the Go1.9Maybe milestone Jan 10, 2017
@dr2chase dr2chase self-assigned this Jan 10, 2017
@dsnet dsnet added this to the Proposal milestone Jan 10, 2017
@dsnet dsnet removed this from the Go1.9Maybe milestone Jan 10, 2017
@gopherbot commented Jan 10, 2017

CL https://golang.org/cl/35054 mentions this issue.


@davecheney (Contributor) commented Jan 11, 2017

5% for amd64 doesn't feel like it justifies the cost of breaking all the assembly written to date.


@minux (Member) commented Jan 11, 2017

@davecheney (Contributor) commented Jan 11, 2017

I agree with Minux: optimising function calls for a 10% speedup, versus inlining and all the code-generation benefits that unlocks, doesn't feel like a good investment.


@josharian (Contributor) commented Jan 11, 2017

I'm still thinking about the proposal, but I will note that for asm functions with Go prototypes and normal stack usage, we can do automated rewrites. This is also a reminder that the scope of this proposal should include updating vet's asmdecl check.


@minux (Member) commented Jan 11, 2017

@randall77 (Contributor) commented Jan 11, 2017

@minux Yes, we will have all the information necessary to print args as we do now. I suspect it just isn't implemented yet.


@minux (Member) commented Jan 11, 2017

@randall77 (Contributor) commented Jan 11, 2017

The values enter in registers but they will be spilled to the arg slots if they are live at any call. So all live arguments will still be correct. Even modified live arguments should work (except for a few weird corner cases where multiple values derived from the same input are live).

Dead arguments are not correct even today. What you see in tracebacks is stale. They may get more wrong with this implementation, so it may be worth marking/not displaying dead args.

Things will get trickier if we allow callee-saved registers. We're thinking about it, but it isn't in the design doc yet.
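The leaf/non-leaf distinction above can be sketched with two hypothetical functions; the comments describe what the proposed scheme would do per the explanation above, not what any current compiler does:

```go
package main

import "fmt"

//go:noinline
func helper(x int) int { return x * 2 }

// leafSum's arguments are never live at a call, so under the proposed
// convention they could stay in registers for the whole function: the
// reserved argument stack slots would never be written.
func leafSum(a, b int) int {
	return a + b
}

// liveAcrossCall keeps `a` live across the call to helper, so the
// compiler would spill `a` to its reserved argument slot before the
// call; a traceback taken inside helper would then see a correct
// value for `a`.
func liveAcrossCall(a, b int) int {
	t := helper(b)
	return a + t
}

func main() {
	fmt.Println(leafSum(1, 2), liveAcrossCall(3, 4))
}
```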


@minux (Member) commented Jan 11, 2017

@randall77 (Contributor) commented Jan 11, 2017

When we pass an argument in a register, the corresponding stack slot is reserved but not initialized, so there's no memory traffic for it. Only if the callee needs to spill it will that slot actually get initialized.


@dr2chase (Contributor, Author) commented Jan 11, 2017

I'm not sure how "full" the design docs are.
I've got a CL up, and here's a more readable version of it that I'll try to keep up to date: https://gist.github.com/dr2chase/5a1107998024c76de22e122ed836562d

(Repeating some of that gist) I reviewed a bunch of ABIs, and the combination of

  • need someplace to spill if the stack must grow, and we have no frame yet
  • callee-saves makes tracing stacks for GC a good deal more painful; the impact on assembly language is also a good deal larger.
  • there's no memory traffic unless there's a spill, and then we must spill somewhere
  • most of the existing ABIs (all but Arm64) are unfriendly to multiple return values in registers
  • assurances from debugger/dump-inspection people (Delve and Backtrace.io) that they use DWARF, if we take the time to get it right

caused me to decide to try something other than the standard ABI(s). The new goal was to minimize change/cost while still getting plausible benefits, so I decided to keep as much of the existing design as possible.

The main loss in reserving stack space is increased stack size; we only spill if we must, but if we must, we spill to the "old" location.

As far as backtraces go, we need to improve our DWARF output for debuggers, and I think we will, and then use that to produce readable backtraces. That should get this as right as it can be (up to variables not being live) and would be more generally accessible than binary data.

So actually turning this on may be gated by DWARF improvements.


@minux (Member) commented Jan 11, 2017

@bcmills (Member) commented Jan 12, 2017

if we can align our callee-saved registers with the platform ABI, cgo callbacks will be faster

Not to mention more reliable. A lot of the signal trampolines in the runtime could be a whole lot simpler if they didn't have to deal with the ABI mismatch between signal handlers (which must use the platform ABI) and the runtime's Go functions. (I'm still not entirely convinced that there aren't more calling-convention bugs lurking.)


@minux (Member) commented Jan 12, 2017

@ianlancetaylor (Contributor) commented Jan 12, 2017

At each call, have the PCDATA for that call record, for each callee-saved register, whether it holds 1) a pointer; 2) a non-pointer; 3) whatever it held on function entry. Also at each call, record the list of callee-saved registers saved from the caller, and where they are.

Now, on stack unwind, you can see the list of callee-saved registers. Then you look to the caller to see whether the value is a pointer or not. Of course you may have to look at the caller's caller, etc., but since the number of callee-saved registers is small and fixed it's easy to keep track of what you need to do as you walk up the stack.
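The bookkeeping described above can be sketched with a toy unwinder. This is not runtime code, just a hypothetical model of the rule: each call site records, per callee-saved register, whether it holds a pointer, a non-pointer, or whatever it held on function entry, and resolving a register walks up the stack until some frame gives a definite answer.

```go
package main

import "fmt"

// regState is what a call site's PCDATA would record for one
// callee-saved register.
type regState int

const (
	nonPointer regState = iota
	pointer
	inherited // register still holds whatever it held on function entry
)

// frame models one frame's per-register records; say there are four
// callee-saved registers.
type frame struct {
	regs [4]regState
}

// isPointer resolves whether callee-saved register r holds a pointer at
// the innermost frame (stack[0]) by walking outward past "inherited"
// entries until a frame answers definitively.
func isPointer(stack []frame, r int) bool {
	for _, f := range stack {
		switch f.regs[r] {
		case pointer:
			return true
		case nonPointer:
			return false
		}
		// inherited: keep walking toward the outermost frame
	}
	return false // outermost frame reached: register holds no Go pointer
}

func main() {
	stack := []frame{
		{regs: [4]regState{inherited, nonPointer, inherited, pointer}},
		{regs: [4]regState{pointer, inherited, inherited, inherited}},
	}
	fmt.Println(isPointer(stack, 0), isPointer(stack, 1), isPointer(stack, 2))
}
```

Because the number of callee-saved registers is small and fixed, the walk carries only a constant amount of pending state, which is the point Ian makes above.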


@philhofer (Contributor) commented Jan 12, 2017

5% for amd64 doesn't feel like it justifies the cost of breaking all the assembly written to date.

Much like SSA, I expect this change will have a much more meaningful effect on minority architectures.

Those fancy Intel chips can resolve push/pop pairs halfway through the instruction pipeline (without even emitting a uop!), while a load from the stack on a Cortex-A57 has a 4-cycle result latency.


@minux (Member) commented Jan 12, 2017

@philhofer (Contributor) commented Jan 12, 2017

Result latency doesn't matter much on an out-of-order core because L1D on Intel chips has similar latency.

It especially doesn't matter if you have shadow registers, but AFAIK none of the arm/arm64 designs have 'em.

I suspect the benefit could be higher, but for non-leaf functions we still need to spill the argument registers to the stack, which negates most of the benefit for large and non-leaf functions.

You can't just move them to callee-saves if they're still live? Is that for GC, or a requirement for traceback?


@minux (Member) commented Jan 12, 2017

@philhofer (Contributor) commented Jan 12, 2017

In the current ABI, every register is caller-save. My counterproposal is introducing callee-save registers but keeping argument passing on the stack to preserve the current traceback behavior.

Ah. I guess I assumed, incorrectly, that this proposal included making some registers callee-save.


@cherrymui (Contributor) commented Jan 12, 2017

Therefore, it seems registered parameters will mostly help small leaf functions. And if that's true, I imagine inlining those functions will actually provide even more speedup because then the compiler can optimize across function call boundaries.

Inlining doesn't help on function pointer calls.


@bcmills (Member) commented Jan 12, 2017

@dr2chase

there's no memory traffic unless there's a spill, and then we must spill somewhere

That's not strictly true. Empty stack slots still decrease cache locality by forcing the actual in-use stack across a larger number of cache lines. There's no extra memory traffic between the CPU and cache unless there's a spill, but reserving stack slots may well increase memory traffic on the bus.

most of the existing ABIs (all but Arm64) are unfriendly to multiple return values in registers

Not true. It would be trivial to extend the SysV AMD64 ABI for multiple return values, for example: it already has two return registers (rax and rdx), and it's easy enough to define extensions of the ABI that pass additional Go return values in other caller-save registers.

For example, we could do something like:

rax: varargs count; 1st return
rbx: callee-saved
rcx: 4th argument; 3rd return
rdx: 3rd argument; 2nd return
rsp: stack pointer
rbp: frame pointer
rsi: 2nd argument
rdi: 1st argument
r8: 5th argument; 4th return
r9: 6th argument; 5th return
r11: temporary register
r12-r14: callee-saved
r15: GOT base pointer
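Under that sketch, a Go signature would map roughly as follows. The register comments are purely illustrative of the hypothetical assignment above (not an implemented ABI), and the function itself is invented:

```go
package main

import "fmt"

// Under the hypothetical extended SysV assignment sketched above:
//   a -> rdi (1st argument)    x -> rax (1st return)
//   b -> rsi (2nd argument)    y -> rdx (2nd return)
//   c -> rdx (3rd argument)    z -> rcx (3rd return)
// Note rdx serves as both an argument and a return register, which is
// fine because arguments are dead by the time results are produced.
func stats(a, b, c int) (x, y, z int) {
	return a + b + c, a * b * c, c - a
}

func main() {
	x, y, z := stats(1, 2, 3)
	fmt.Println(x, y, z)
}
```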


@mundaym (Member) commented Jan 13, 2017

Am I understanding correctly that this proposal is calling for structs to always be unpacked into registers before a call? Has any thought been given to passing structs as read-only references? I think this is how most (all?) of the ELF ABIs handle structs, particularly larger ones.

This way the callee gets to decide whether it needs to create a local copy of anything, avoiding copying in many cases. It is also presumably fairly common for only a subset of struct fields to be accessed, so unpacking all or part of the struct may be unnecessarily expensive (particularly if the struct has boolean fields). Obviously the reference would have to be to something on the stack (copied there, if necessary, by the caller) or to read-only memory.

For Go in particular it seems like this would be a nice fit because the only way to get const-like behaviour is to pass structs by value and internally passing them by reference instead would potentially make that idiom a lot cheaper.

Arrays and maybe slices could also be handled the same way.
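The idiom in question, sketched with a hypothetical type and function: passing a large struct by value gives the callee const-like isolation, which today costs a copy at every call; the by-reference convention suggested above would let the callee decide whether a mutable copy is actually needed.

```go
package main

import "fmt"

// Big stands in for a struct large enough that copying it at every
// call is noticeable.
type Big struct {
	Data [64]int64
	Used bool
}

// sumByValue gets const-like behaviour: it cannot modify the caller's
// copy. Today this costs a full copy of Big per call; under a
// by-reference convention the compiler could pass a read-only
// reference and copy only if the callee needed a mutable local.
func sumByValue(b Big) int64 {
	var s int64
	for _, v := range b.Data {
		s += v
	}
	return s
}

func main() {
	var b Big
	b.Data[0], b.Data[1] = 40, 2
	fmt.Println(sumByValue(b))
}
```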


@minux (Member) commented Jan 13, 2017

@minux (Member) commented Jan 19, 2017

@randall77 (Contributor) commented Jan 19, 2017

That the proposals conflict is ok. And we can discuss those conflicts here. But without a concrete proposal about how you'd like to make tracebacks better it's hard to see exactly what those conflicts would be.


@rsc rsc changed the title proposal: function and method arguments and results should be passed in registers proposal: cmd/compile: define register-based calling convention Jan 23, 2017
@aarzilli (Contributor) commented Feb 3, 2017

I'm interested in knowing how this would interact with non-optimized compilation. Right now non-optimized compilation registerizes sparingly, which is great for debuggers since the Go compiler isn't good at saving information about registerization. Putting a lot more things into registers without getting better at writing the appropriate debug symbols would be bad.


@bcmills (Member) commented Feb 3, 2017

I'm interested in knowing how this would interact with non-optimized compilation. Right now non-optimized compilation registerizes sparingly, which is great for debuggers since the Go compiler isn't good at saving information about registerization.

Presumably part of the changes for the calling convention would be making the compiler emit more accurate DWARF info for register parameters.


@rsc (Contributor) commented Apr 3, 2017

On hold until @dr2chase is ready to proceed.


@dr2chase (Contributor, Author) commented Apr 3, 2017

Current plan is 1.10; time got reallocated to loop unrolling and to whatever help was required to improve the DWARF support in general, so that we have the ability to say what we're doing with parameters.

There's been a lot of churn in the compiler in the last couple of months -- new source position, pushing the AST through, making things more ready for running in parallel, and moving liveness into the SSA form -- so I am okay with waiting a release.


@smasher164 (Member) commented Oct 31, 2017

Would the proposed calling convention omit frame pointers for leaf functions? I can see this varying based on whether the function stores the state of a callee-save register, takes the address of a local variable, etc. In that case, is there still a feasible way to obtain the call stack?


@wkhere commented Jan 6, 2018

A naive proposal:

maybe a good optimization would be, instead of changing the calling convention, to focus on eliminating stack-resident local variables by using registers, and on sharing as much stack-frame space as possible between variables and the arguments/return values of subcalls.

This way (1) registers would speed things up, (2) stack frames would be smaller, and (3) the existing calling convention would be preserved.


@davecheney (Contributor) commented Jan 6, 2018

@quasilyte (Contributor) commented Feb 7, 2018

I may be wrong, but a register-based calling convention (CC) could give a bigger performance boost if additional SSA rules were added that actually take advantage of it.

Currently, a large number of small and average-sized functions contain many mem->reg->mem patterns that make certain optimizations on amd64 not worth it.

It is not fair to say that 10% is not significant. It's 10% with an optimizer that treats GOARCH=amd64 as if it were GOARCH=386; the potential gain is higher.

Lately, I was comparing 6g with gccgo, and even without machine-dependent optimizations, aggressive inlining, and constant folding there was about a 25-40% performance difference in some allocation-free benchmarks. I do believe that this difference includes the CC's impact (because most other parts of the output machine code look nearly the same).


@laboger (Contributor) commented Feb 8, 2018

As noted above, the performance benefit of a register-based calling convention is higher on RISC architectures like ppc64le & ppc64. If the above CL is not stale, I would be willing to try it out. To me this is one of the biggest performance issues with Go on ppc64le.

Inlining does help, but it doesn't handle all cases when conditions inhibit inlining. Also, functions written in asm can't be inlined, AFAIK.


@dr2chase (Contributor, Author) commented Feb 8, 2018

The CL is very stale. First we have to get to a good place with debugger information (seems very likely for 1.10 [oops, 1.11]), and then we have to redo some of the experiment (it will go more quickly this time) with a better set of benchmarks. Some optimizations that looked promising on the go1 benchmarks turn out to look considerably less profitable when run against a wider set of benchmarks.


@as (Contributor) commented Feb 9, 2018

@dr2chase

Is the method used to determine the 5% performance improvement described somewhere in the discussion? I'm curious what the function sample space actually looks like.


@laboger (Contributor) commented Feb 9, 2018

I think the measurement we want is how much does this help once inlining is fully enabled by default.


@dr2chase (Contributor, Author) commented Feb 9, 2018

The estimate was based on counting call frequencies, looking at performance improvement on a small number of popular calls "registerized", and extrapolating that to the full number of calls.

I agree that we want to try better inlining first (it's in the queue for 1.11, now that we have that debugging information fixed), since that will cut the call count (and thus reduce the benefit) of this somewhat more complicated optimization. Mid-stack inlining, however, was one of the optimizations that looked a lot less good when applied to the larger suite of benchmarks. One problem with the larger benchmark suite is selection effect: anyone who wrote a benchmark for parts of their application cares enough about performance to write that benchmark, and has probably already used it to hand-optimize around rough spots in the Go implementation, so we'll see reduced gains from optimizing those rough spots.
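That style of estimate can be made concrete with a tiny arithmetic sketch. All figures below are illustrative assumptions for the sake of the arithmetic, not the actual measurements behind the 5% number:

```go
package main

import "fmt"

func main() {
	// Illustrative only: suppose profiling shows call overhead
	// accounts for 12% of total cycles, and registerizing a sample of
	// hot calls cut their overhead by 40%.
	callShare := 0.12
	perCallSaving := 0.40

	// Extrapolated whole-program speedup from registerizing all calls,
	// assuming the sampled calls are representative.
	estimated := callShare * perCallSaving
	fmt.Printf("estimated speedup: %.1f%%\n", estimated*100)
}
```

The extrapolation is only as good as the representativeness assumption, which is exactly the selection-effect caveat raised above.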


@CAFxX (Contributor) commented Feb 10, 2018

and has probably used it already to hand-optimize around rough spots in the Go implementation

From my point of view, the goal of having a better inliner is to have more readable/maintainable (less "hand-optimized") code, not really faster code. So that we can build code addressing the problems we're trying to solve, not the shortcomings of the compiler.


@ianlancetaylor (Contributor) commented Aug 12, 2020

There is an updated proposal at #40724, which addresses many of the issues raised here. Closing this proposal in favor of that one.


@golang golang locked and limited conversation to collaborators Nov 13, 2021