proposal: cmd/compile: define register-based calling convention #18597

Open
dr2chase opened this Issue Jan 10, 2017 · 72 comments

@dr2chase
Contributor

dr2chase commented Jan 10, 2017

Performance would generally improve somewhat if arguments to and results from function and method calls were passed in registers instead of on the stack. Projected improvements, based on a limited prototype, are in the range of 5-10%.

The running CL for the prototype: https://go-review.googlesource.com/c/28832/
The prototype, because it is a prototype, uses a pragma that should be unnecessary in the final design, though helpful during development.

(This is a placeholder bug for a design doc.)

Problems/tradeoffs noted below, through 2017-01-19 (#18597 (comment)):

This will reduce the usefulness of panic tracebacks because it will confuse/corrupt the per-frame argument information there. This was already a problem with SSA register allocation and spilling; this will make it worse. Perhaps only results should be returned in registers.

Does this provide enough of a performance boost to justify "breaking" (how much?) existing assembly language?

Given that this breaks existing assembly language, why aren't we also doing a more thorough revision of the calling conventions to include callee-saves registers?

This should use the per-platform ABIs, to make interaction with native code go more smoothly.

Because this preserves the memory layout of the existing calling convention, it will consume more stack space than it otherwise might if that space had been optimized out.

And the responses to the above, roughly:

In compiler work, 5% (as a geometric mean across benchmarks) is a big deal.

Panic tracebacks are a problem; we will work on that. One possible mitigation is to use DWARF information to make tracebacks much better in general, including properly named and interpreted primitive values. A simpler mitigation for targeted debugging could be an annotation indicating that a function should be compiled to store arguments back to the stack (old style) to ensure that particular function's frame was adequately informative. This is also going to be an issue for improved inlining because, viewed at a low level, intermediate frames will disappear from backtraces.

The scope of required assembly-language changes is expected to be quite small; from Go-side function declarations the compiler ought to be able to create shims for the assembly to use around function entry, function exit, and surrounding assembly-language CALLs to named functions. The remaining case is assembly-language CALL via a pointer, and these are rare, especially outside the runtime. Thus, the need to bundle changes because they are disruptive is reduced, because the changes aren't that disruptive.

Incorporating callee-save registers introduces a garbage-collector interaction that is not necessarily insurmountable, but other garbage-collected languages (e.g., Haskell) have been sufficiently intimidated by it that they elected not to use callee-save registers. Because the assembler can also modify stack frames to include callee-save spill areas and introduce entry/exit shims to save/restore callee-save registers, this appears to have lower impact than initially estimated, and thus we have reduced need to bundle changes. In addition, if we stake out potential callee-save registers early, developers can look ahead and adapt their code before it is required.

If each platform's ABI were used instead of this adaptation of the existing calling conventions, the assembly-language impact would be larger, the garbage-collector interactions would be larger, and either best-treatment for Go's multivalue returns would suffer or the cgo story would be sprinkled with asterisks and gotchas. As an alternative (a different way of obtaining a better story for cgo), would we consider a special annotation for cgo-related functions to indicate that exactly the platform calling conventions were used, with no compromises for Go performance?
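
For concreteness, a minimal sketch of the existing convention whose layout the prototype keeps (amd64 offsets; the function is illustrative, not part of the proposal):

// Under the current convention, the caller reserves stack slots for both
// arguments and the result; the callee sees them as a+0(FP), b+8(FP), and
// ret+16(FP) on amd64. Under the proposal, a, b, and the result would travel
// in registers, with the slots still reserved but written only on a spill.
func Add(a, b int64) int64 {
    return a + b
}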

@dr2chase dr2chase added this to the Go1.9Maybe milestone Jan 10, 2017

@dr2chase dr2chase self-assigned this Jan 10, 2017

@cespare cespare added the Proposal label Jan 10, 2017

@dsnet dsnet modified the milestones: Proposal, Go1.9Maybe Jan 10, 2017

@gopherbot

gopherbot commented Jan 10, 2017

CL https://golang.org/cl/35054 mentions this issue.

@davecheney

Contributor

davecheney commented Jan 11, 2017

5% for amd64 doesn't feel like it justifies the cost of breaking all the assembly written to date.

@davecheney

Contributor

davecheney commented Jan 11, 2017

I agree with Minux: optimising function calls for a 10% speedup, versus inlining and all the code-generation benefits that unlocks, doesn't feel like a good investment.

@josharian

Contributor

josharian commented Jan 11, 2017

I'm still thinking about the proposal, but I will note that for asm functions with Go prototypes and normal stack usage, we can do automated rewrites. That is also a reminder that the scope of this proposal should include updating vet's asmdecl check.

@randall77

Contributor

randall77 commented Jan 11, 2017

@minux Yes, we will have all the information necessary to print args as we do now. I suspect it just isn't implemented yet.

@randall77

Contributor

randall77 commented Jan 11, 2017

The values enter in registers but they will be spilled to the arg slots if they are live at any call. So all live arguments will still be correct. Even modified live arguments should work (except for a few weird corner cases where multiple values derived from the same input are live).

Dead arguments are not correct even today. What you see in tracebacks is stale. They may get more wrong with this implementation, so it may be worth marking/not displaying dead args.

Things will get trickier if we allow callee-saved registers. We're thinking about it, but it isn't in the design doc yet.
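
A small sketch of the live-versus-dead argument behavior described above, assuming a register convention and that g is not inlined (names are made up):

// If x arrives in a register, f must spill it to its reserved argument slot
// because x is live across the call to g, so a traceback taken inside g can
// still show it. In h, x is dead before the call, so its slot is never
// written and a traceback would show a stale value.
func f(x int) int {
    g()
    return x + 1
}

func h(x int) int {
    y := x + 1
    g()
    return y
}

//go:noinline
func g() {}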

@randall77

Contributor

randall77 commented Jan 11, 2017

When we pass an argument in a register, the corresponding stack slot is reserved but not initialized, so there's no memory traffic for it. Only if the callee needs to spill it will that slot actually get initialized.

@dr2chase

Contributor

dr2chase commented Jan 11, 2017

I'm not sure how "full" the design docs are.
I've got a CL up, and here's a more readable version of it that I'll try to keep up to date: https://gist.github.com/dr2chase/5a1107998024c76de22e122ed836562d

(Repeating some of that gist) I reviewed a bunch of ABIs, and the combination of

  • need someplace to spill if the stack must grow, and we have no frame yet
  • callee-saves makes tracing stacks for GC a good deal more painful; the impact on assembly language is also a good deal larger.
  • there's no memory traffic unless there's a spill, and then we must spill somewhere
  • most of the existing ABIs (all but Arm64) are unfriendly to multiple return values in registers
  • assurances from debugger/dump-inspection people (Delve and Backtrace.io) that they use DWARF, if we take the time to get it right

caused me to decide to try something other than standard ABI(s). The new goal was to minimize change/cost while still getting plausible benefits, so I decided to keep as much of the existing design as possible.

The main loss in reserving stack space is increased stack size; we only spill if we must, but if we must, we spill to the "old" location.

As far as backtraces go, we need to improve our DWARF output for debuggers, and I think we will, and then use that to produce readable backtraces. That should get this as right as it can be (up to variables not being live) and would be more generally accessible than binary data.

So actually turning this on may be gated by DWARF improvements.

@bcmills

Member

bcmills commented Jan 12, 2017

if we can align our callee-saved registers with the platform ABI, cgo callbacks will be faster

Not to mention more reliable. A lot of the signal trampolines in the runtime could be a whole lot simpler if they didn't have to deal with the ABI mismatch between signal handlers (which must use the platform ABI) and the runtime's Go functions. (I'm still not entirely convinced that there aren't more calling-convention bugs lurking.)

@ianlancetaylor

Contributor

ianlancetaylor commented Jan 12, 2017

At each call, have the PCDATA for that call record, for each callee-saved register, whether it holds 1) a pointer; 2) a non-pointer; 3) whatever it held on function entry. Also at each call, record the list of callee-saved registers saved from the caller, and where they are.

Now, on stack unwind, you can see the list of callee-saved registers. Then you look to the caller to see whether the value is a pointer or not. Of course you may have to look at the caller's caller, etc., but since the number of callee-saved registers is small and fixed it's easy to keep track of what you need to do as you walk up the stack.
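
A rough sketch of the per-call-site record this describes; the names and types are hypothetical, not the runtime's actual PCDATA/FUNCDATA encoding:

// Hypothetical illustration only.
type regState uint8

const (
    regPointer   regState = iota // register holds a live pointer
    regScalar                    // register holds a non-pointer value
    regInherited                 // register still holds the caller's value
)

const numCalleeSaved = 5 // assumed count, for illustration

type callSiteInfo struct {
    state   [numCalleeSaved]regState // state of each callee-saved register at this call
    savedAt [numCalleeSaved]int32    // frame offset of the saved caller value, or -1 if not saved
}

During unwinding, a regInherited entry tells the scanner to consult the caller's record for that register, which is the "walk up the stack" step described above.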

@philhofer

Contributor

philhofer commented Jan 12, 2017

5% for amd64 doesn't feel like it justifies the cost of breaking all the assembly written to date.

Much like SSA, I expect this change will have a much more meaningful effect on minority architectures.

Those fancy Intel chips can resolve push/pop pairs halfway through the instruction pipeline (without even emitting a uop!), while a load from the stack on a Cortex A57 has a 4-cycle result latency.

@philhofer

Contributor

philhofer commented Jan 12, 2017

Result latency doesn't matter much on an out-of-order core because L1D on Intel chips has similar latency.

It especially doesn't matter if you have shadow registers, but AFAIK none of the arm/arm64 designs have 'em.

I suspect the benefit could be higher, but for non-leaf functions we still need to spill the argument registers to the stack, which negates most of the benefit for large and non-leaf functions.

You can't just move them to callee-saves if they're still live? Is that for GC, or a requirement for traceback?

@philhofer

Contributor

philhofer commented Jan 12, 2017

In the current ABI, every register is caller-save. My counterproposal is introducing callee-save registers but keeping argument passing on the stack, to preserve the current traceback behavior.

Ah. I guess I assumed, incorrectly, that this proposal included making some registers callee-save.

@cherrymui

Contributor

cherrymui commented Jan 12, 2017

Therefore, it seems registered parameters will mostly help small leaf functions. And if that's true, I imagine inlining those functions will actually provide even more speedup because then the compiler can optimize across function call boundaries.

Inlining doesn't help on function pointer calls.

@bcmills

Member

bcmills commented Jan 12, 2017

@dr2chase

there's no memory traffic unless there's a spill, and then we must spill somewhere

That's not strictly true. Empty stack slots still decrease cache locality by forcing the actual in-use stack across a larger number of cache lines. There's no extra memory traffic between the CPU and cache unless there's a spill, but reserving stack slots may well increase memory traffic on the bus.

most of the existing ABIs (all but Arm64) are unfriendly to multiple return values in registers

Not true. It would be trivial to extend the SysV AMD64 ABI for multiple return values, for example: it already has two return registers (rax and rdx), and it's easy enough to define extensions of the ABI that pass additional Go return values in other caller-save registers.

For example, we could do something like:

rax: varargs count; 1st return
rbx: callee-saved
rcx: 4th argument; 3rd return
rdx: 3rd argument; 2nd return
rsp: stack pointer
rbp: frame pointer
rsi: 2nd argument
rdi: 1st argument
r8: 5th argument; 4th return
r9: 6th argument; 5th return
r11: temporary register
r12-r14: callee-saved
r15: GOT base pointer
@mundaym

Member

mundaym commented Jan 13, 2017

Am I understanding correctly that this proposal is calling for structs to always be unpacked into registers before a call? Has any thought been given to passing structs as read-only references? I think this is how most (all?) of the ELF ABIs handle structs, particularly larger ones.

This way the callee gets to decide whether it needs to create a local copy of anything, avoiding copying in many cases. It is also presumably fairly common for only a subset of struct fields to be accessed, so unpacking all or part of the struct may be unnecessarily expensive (particularly if the struct has boolean fields). Obviously the reference would have to be to something on the stack (copied there, if necessary, by the caller) or to read-only memory.

For Go in particular this seems like a nice fit, because the only way to get const-like behaviour is to pass structs by value, and internally passing them by reference instead would potentially make that idiom a lot cheaper.

Arrays and maybe slices could also be handled the same way.
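
A minimal sketch of the by-value idiom in question (names are illustrative): the callee gets const-like behaviour because it receives a copy, and an ABI that passed a read-only reference under the hood could keep those semantics without copying the whole struct at every call.

package main

import "fmt"

type Config struct {
    Name    string
    Retries int
    Verbose bool
}

// Validate only reads two fields, yet today the caller copies the entire
// Config into the argument area for every call.
func Validate(c Config) error {
    if c.Retries < 0 {
        return fmt.Errorf("config %q: negative retry count", c.Name)
    }
    return nil
}

func main() {
    fmt.Println(Validate(Config{Name: "example", Retries: 3}))
}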

@bcmills

Member

bcmills commented Jan 18, 2017

Adding callee-saves seems to guarantee a lot more meddling in existing assembly language, unless we come up with some clever plan. If the clever plan were a tool, the tool must notice writes to callee-save registers, modify the stack frame to add a spill area for those registers, and include FUNCDATA to indicate the presence of the spill. How does the tool know where in the assembly frame to put the spills?

We don't need to notice writes to callee-save registers: we can spill them unconditionally. (If that's a performance problem, the trivial workaround is to rewrite the assembly function to use the new calling convention.)

We would need to do the spills at the point where we change calling conventions: when we go to call across conventions, we spill all the callee-saved registers, then copy all of the arguments to the stack, call the assembly function, copy return-parameters to registers, restore callee-saved registers, and we're done.

IIUC, that would only require FUNCDATA to indicate the spills at the call sites, for which we would generate the same FUNCDATA as for any other spill (indicating that those registers do not contain GC roots after the spill occurs).
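
A hand-written sketch of that bridging sequence, assuming a hypothetical register convention; in practice the wrapper would be generated by the toolchain, and asmSum stands in for an existing stack-convention assembly function (declaration only, body not shown):

// asmSum is implemented in assembly against the current stack-based
// convention: a+0(FP), b+8(FP), ret+16(FP) on amd64.
func asmSum(a, b int64) int64

// sum is the register-convention side. The generated bridge would spill any
// callee-saved registers (ordinary spill FUNCDATA, so they are not treated
// as GC roots), copy a and b into the stack slots asmSum expects, call it,
// then move the result back into a register and restore the saved registers.
func sum(a, b int64) int64 {
    return asmSum(a, b)
}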

Alternately, we could choose the callee-saves registers in this turn of the crank and strongly suggest that anyone rewriting their assembly language now avoid using them, or preemptively add the save/restore code, so that when we do start using them for callee-saves in the future there is not a roadblocking need to repair code in a hurry.

I like that idea. We're already trending a bit in that direction (see https://go-review.googlesource.com/#/c/35068/).

For helping Cgo, might we be better off conforming exactly to the platform ABI for such calls, and only for those calls?

Which calls do you mean, exactly? There are (unfortunately) several routes by which Go functions can end up being called from C ABI functions: at least explicit cgo calls and signal handlers. At any rate, I do think that (at least for platform ABIs that don't have crazy stack bloat) we should try to conform to the platform ABI.

@bcmills

Member

bcmills commented Jan 18, 2017

Thinking about tracebacks. At any given point in the program, we know whether an argument is still valid or not; we shouldn't show invalid arguments in tracebacks, but we may have some arguments unavailable.

The most interesting function arguments are in two categories. One is "things that could have been GC'd but weren't yet", which favors passing in registers without extra spilling for tracebacks. The other is "values which were passed further down the stack". The latter could be traced and displayed via FUNCDATA annotations using a more general version of @ianlancetaylor's suggested approach: if we know where the contents of the registers (and stack slots) came from, we can reconstruct the arguments which are still around because they were passed as arguments (or stored to local variables) further down the call chain.

Does DWARF already support that kind of value-propagation annotation?

@dr2chase

Contributor

dr2chase commented Jan 19, 2017

Can you clarify "things that could have been GC'd but weren't yet"?

Do you mean pointers that are functionally dead, and if a GC has occurred might now point to reused memory, or something else? That's a bad idea unless there is some indication (to debugger and/or traceback) that the pointer is stale and that examining its referent might lead to confusion.

@bcmills

Member

bcmills commented Jan 19, 2017

Do you mean pointers that are functionally dead, and if a GC has occurred might now point to reused memory, or something else?

I mean pointers for which:

  • the last remaining reference is a function argument and
  • the execution of the corresponding function call has passed the last use of that argument.

If function arguments are passed on the stack (or spilled for the lifetime of the function), then collecting those pointers requires extra work. On the other hand, if function arguments are passed in registers (and only spilled if they have live references after the spill), then collecting those pointers is automatic — and preserving them for display in tracebacks requires extra work.

@randall77

Contributor

randall77 commented Jan 19, 2017

We're already in a state where what is displayed in the traceback is context-dependent.

func f(x *int) {
   x = new(int)
   g()
   // ... use x ...
}

If there is a panic during the call to g, the value displayed in the stack traceback for x's argument slot will show the newly allocated pointer, not the original value of x at the start of the function.

The reason this happens is that the newly allocated pointer is spilled to the argument slot for x. We used to do this to ensure that the old value x pointed to could be garbage collected during the call to g. Now that (as of Go 1.8) we use our precise liveness information for arguments (and have runtime.KeepAlive available to override if necessary), we could revisit this decision. It would cost some stack space, but probably not much.

In any case, all that is pretty tangential to the proposal at hand. Not making tracebacks worse is certainly worth discussing here. Making them better should be a separate proposal.
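
For reference, a minimal sketch of the runtime.KeepAlive override mentioned above, adapted from its documentation (the finalizer setup is illustrative):

package example

import (
    "runtime"
    "syscall"
)

type file struct{ fd int }

func readSome(path string) ([]byte, error) {
    fd, err := syscall.Open(path, syscall.O_RDONLY, 0)
    if err != nil {
        return nil, err
    }
    f := &file{fd}
    runtime.SetFinalizer(f, func(f *file) { syscall.Close(f.fd) })

    buf := make([]byte, 64)
    n, err := syscall.Read(f.fd, buf)
    // Without this, f could be considered dead (and finalized, closing fd)
    // before syscall.Read returns, since only f.fd is referenced above.
    runtime.KeepAlive(f)
    return buf[:n], err
}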

@randall77

Contributor

randall77 commented Jan 19, 2017

That the proposals conflict is ok. And we can discuss those conflicts here. But without a concrete proposal about how you'd like to make tracebacks better it's hard to see exactly what those conflicts would be.

@rsc rsc changed the title from proposal: function and method arguments and results should be passed in registers to proposal: cmd/compile: define register-based calling convention Jan 23, 2017

@aarzilli

Contributor

aarzilli commented Feb 3, 2017

I'm interested in knowing how this would interact with non-optimized compilation. Right now non-optimized compilations registerize sparingly, which is great for debuggers since the Go compiler isn't good at saving information about registerization. Putting a lot more things into registers without getting better at writing the appropriate debug symbols would be bad.

@bcmills

Member

bcmills commented Feb 3, 2017

I'm interested in knowing how this would interact with non-optimized compilation. Right now non-optimized compilations registerize sparingly, which is great for debuggers since the Go compiler isn't good at saving information about registerization.

Presumably part of the changes for the calling convention would be making the compiler emit more accurate DWARF info for register parameters.

@rsc

Contributor

rsc commented Apr 3, 2017

On hold until @dr2chase is ready to proceed.

@rsc rsc added the Proposal-Hold label Apr 3, 2017

@dr2chase

Contributor

dr2chase commented Apr 3, 2017

Current plan is 1.10; time got reallocated to loop unrolling and whatever help was required to make the DWARF support better in general, so that we have the ability to say what we're doing with parameters.

There's been a lot of churn in the compiler in the last couple of months -- new source position, pushing the AST through, making things more ready for running in parallel, and moving liveness into the SSA form -- so I am okay with waiting a release.

@smasher164

Contributor

smasher164 commented Oct 31, 2017

Would the proposed calling convention omit frame pointers for leaf functions? I can see this varying based on whether the function stores the state of a callee-save register, takes the address of a local variable, etc. In that case, is there still a feasible way to obtain the call stack?

@wkhere

wkhere commented Jan 6, 2018

A naive proposal:

maybe a good optimization would be, instead of changing the calling convention, to focus on eliminating local variables on the stack by using registers, and on sharing as much stack-frame space as possible between variables and the arguments/return values of subcalls.

This way (1) registers would speed things up, (2) stack frames would be smaller, and (3) the existing calling convention would be preserved.

@Quasilyte

Contributor

Quasilyte commented Feb 7, 2018

I could be wrong, but a register-based calling convention (CC) could give a bigger performance boost if additional SSA rules are added that actually take advantage of it.

Currently, a large number of small and average-sized functions have many mem->reg->mem patterns that make certain optimizations on amd64 not worth it.

It is not fair to say that 10% is not significant: it's 10% with an optimizer that treats GOARCH=amd64 as if it were GOARCH=386. The potential gain is higher.

Lately I have been comparing 6g with gccgo, and even without machine-dependent optimizations, aggressive inlining, and constant folding, there was about a 25-40% performance difference in some allocation-free benchmarks. I do believe that this difference includes the CC impact (because most other parts of the output machine code look nearly the same).

@laboger

Contributor

laboger commented Feb 8, 2018

As noted above, the performance benefit for a register-based calling convention is higher on RISC architectures like ppc64le & ppc64. If the above CL is not stale I would be willing to try it out. To me this is one of the biggest performance issues with golang on ppc64le.

Inlining does help but doesn't handle all cases if there are conditions that inhibit inlining. Also functions written in asm can't be inlined AFAIK.

@dr2chase

Contributor

dr2chase commented Feb 8, 2018

The CL is very stale. First we have to get to a good place with debugger information (seems very likely for 1.10 [oops, 1.11]), and then we have to redo some of the experiment (it will go more quickly this time) with a better set of benchmarks. Some optimizations that looked promising on the go1 benchmarks turn out to look considerably less profitable when run against a wider set of benchmarks.

@as

Contributor

as commented Feb 9, 2018

@dr2chase

Is the method used to determine the 5% performance improvement somewhere in the discussion? I'm curious what the function sample space actually looks like.

@laboger

Contributor

laboger commented Feb 9, 2018

I think the measurement we want is how much this helps once inlining is fully enabled by default.

@dr2chase

Contributor

dr2chase commented Feb 9, 2018

The estimate was based on counting call frequencies, looking at performance improvement on a small number of popular calls "registerized", and extrapolating that to the full number of calls.

I agree that we want to try better inlining first (it's in the queue for 1.11, now that we have that debugging information fixed), since that will cut the call count (and thus reduce the benefit) of this somewhat more complicated optimization. Mid-stack inlining, however, was one of the optimizations that looked a lot less good when applied to the larger suite of benchmarks. One problem with the larger benchmark suite is selection effect -- anyone who wrote a benchmark for parts of their application cares enough about performance to write that benchmark, and has probably used it already to hand-optimize around rough spots in the Go implementation, so we'll see reduced gains from optimizing those rough spots.

@CAFxX

Contributor

CAFxX commented Feb 10, 2018

and has probably used it already to hand-optimize around rough spots in the Go implementation

From my point of view, the goal of having a better inliner is to have more readable/maintainable (less "hand-optimized") code, not really faster code. So that we can build code addressing the problems we're trying to solve, not the shortcomings of the compiler.
