
cmd/compile: avoid slow versions of LEA instructions on x86 #21735

martisch opened this issue Sep 1, 2017 · 16 comments

@martisch martisch commented Sep 1, 2017

On newer x86 CPUs (AMD and Intel), 3-operand LEA instructions with base, index, and offset have higher latency and lower throughput than 2-operand LEA instructions.

When emitting instructions, the compiler could rewrite slow LEAs into e.g. LEA + ADD where possible (flag clobbering OK), similar to how MOV $0, R is rewritten to XOR R, R.

Intel® 64 and IA-32 Architectures Optimization Reference Manual Using LEA

For LEA instructions with three source operands and some specific situations, instruction latency has increased to 3 cycles, and must dispatch via port 1:
— LEA that has all three source operands: base, index, and offset.
— LEA that uses base and index registers where the base is EBP, RBP, or R13.

relevant llvm optimization ticket:

/cc @TocarIP @randall77 @josharian

@martisch martisch added this to the Go1.10 milestone Sep 1, 2017
@martisch martisch changed the title cmd/compile: avoid slow versions of LEA instructions on amd64 cmd/compile: avoid slow versions of LEA instructions on x86 Sep 1, 2017

@TocarIP TocarIP commented Sep 1, 2017

If we are going to do this, we can split the LEA into LEA+LEA to avoid clobbering flags.
For reference, there are 952 3-operand LEAs in the go tool binary:

grep "lea.*0x.*%.*%.*,%" qaz | grep -v "0x0(" | wc -l

@bradfitz bradfitz modified the milestones: Go1.10, Go1.11 Nov 28, 2017

@TocarIP TocarIP commented May 10, 2018

Looks like I found a specific example of this.
After b1df8d6 we convert 32-bit multiplication by 0x101 into a slow LEA, which caused a performance regression on master vs 1.10 for image/draw benchmarks:

CMYK-8   461µs ± 2%   513µs ± 1%  +11.31%  (p=0.000 n=10+9)

@martisch martisch commented May 11, 2018

Thanks for noticing the regression and finding a specific test case Ilya. I will work on a slow LEA fix within the next week unless somebody else wants to have a go at it earlier.


@benshi001 benshi001 commented May 15, 2018

Can it be fixed in the assembler? I found that there are inefficient LEAs used in assembly code.


@martisch martisch commented May 15, 2018

Fixing it in asm will need a manual fix (or a script detecting it + a CL to fix it), as I don't think we should rewrite handwritten code when translating to binary. Maybe it's needed to achieve some alignment of instructions.

Also for asm, depending on the context, a LEA+ADD or MUL could be better or equally valid instead of LEA+LEA, so I would say those should be fixed on a case-by-case basis, with benchmarks if possible.

Maybe some vet/lint check for asm might be nice.

I was working on a fix in the amd64 opcode code that emits the assembler instructions, similar to how a MOV of 0 is rewritten as XOR.


@randall77 randall77 commented May 21, 2018

I think we should leave assembly as is. Magic in the assembler is always confusing.

So is the fix just to disable the rules that can generate 3-part LEA instructions? Basically, enforce that LEAQ[1248] can't have either an Aux or AuxInt?
Are all LEA affected? LEA[LW]? Also on 386?

This is a (pretty weak) argument for introducing GOAMD64 architecture variants. If we could figure out reliably which chips have this problem and which don't.


@martisch martisch commented May 22, 2018

While looking into this I found and started on 3 different ways to reduce slow LEAs:

  1. rewrite some of the rules that output LEA to prefer LEA2 [c] x over LEA1 [c] x x
  2. add simplification rules LEA1 [c] x x -> LEA2 [c] x ...
  3. rewrite LEAs with 3 operands into two LEAs when emitting asm for the LEA opcodes.
  4. (bonus, possibly for Go 1.12) rewrite LEAs to use ADD if flags can be clobbered, since ADD has four possible execution ports (e.g. Ryzen, Skylake) while LEA has only 2 on some modern x64 chips.

Still working on testing these, as well as gathering statistics about occurrences and rerunning benchmarks, so we know which slowdowns are covered.

AFAIK slow LEAs with 3 operands exist on modern Intel and AMD chips independent of 386 or x64 mode and word width. But 16-bit LEAs are slower even with 2 operands.


@TocarIP TocarIP commented May 22, 2018

We have a bunch of rules for merging LEA into MOV, so I feel like option 3 will have the fewest unintended consequences. It also makes it easier to switch between versions if we want to, e.g., optimize some function for size instead of speed, or update it for future CPUs.


@gopherbot gopherbot commented May 25, 2018

Change mentions this issue: cmd/compile: split slow 3 operand LEA instructions into two LEAs


@martisch martisch commented May 26, 2018

Did you find a file+line number where a slow LEA is created that affects the draw benchmarks?
For 0x101 multiplication it seems a 2- (but not 3-) operand LEA is created, since there is no offset/displacement, e.g.:

  draw.go:243		0x112f368		89ca			MOVL CX, DX			
  draw.go:243		0x112f36a		c1e108			SHLL $0x8, CX			
  draw.go:243		0x112f36d		8d0c11			LEAL 0(CX)(DX*1), CX

@quasilyte quasilyte commented May 29, 2018

Minimal example for that multiplication is:

package foo

func mul(x uint32) uint32 {
	return x * 0x101
}

which compiles to:

	0x0000 (../foo.go:4)	MOVL	"".x+8(SP), AX
	0x0004 (../foo.go:4)	MOVL	AX, CX
	0x0006 (../foo.go:4)	SHLL	$8, AX
	0x0009 (../foo.go:4)	LEAL	(AX)(CX*1), AX
	0x000c (../foo.go:4)	MOVL	AX, "".~r1+16(SP)
	0x0010 (../foo.go:4)	RET

It used to emit IMUL3L:

	0x0000 (../foo.go:4)	MOVL	"".x+8(SP), AX
	0x0004 (../foo.go:4)	IMUL3L	$257, AX, AX
	0x000a (../foo.go:4)	MOVL	AX, "".~r1+16(SP)
	0x000e (../foo.go:4)	RET

And prior to that, it was IMUL:

	0x0000 (../foo.go:3)	MOVL	"".x+8(SP), AX
	0x0004 (../foo.go:4)	IMULL	$257, AX
	0x000a (../foo.go:4)	MOVL	AX, "".~r1+16(SP)
	0x000e (../foo.go:4)	RET

@martisch martisch commented May 29, 2018

@quasilyte Thanks, but I do not think this is a slow LEA version affecting the benchmark. (However, the particular rewrite to MOV+SHL+(fast)LEA still looks to be slower.)
LEAL (AX)(CX*1), AX is AFAIK not a slow (3-operand) version of LEA, so the posted fix for slow LEAs itself won't fix that particular issue (mult by 0x101).


@TocarIP TocarIP commented May 29, 2018

In image/draw.drawCMYK I see (in perf output) the following hotspot:

mov    0x4(%rsp),%r8d                                                                                                                                                        
  2.20 │       lea    -0xffff(%r8,%r13,1),%r8d                                                                                                                                              
  0.20 │       neg    %r8d                                                                                                                                                                  
  3.77 │       imul   %eax,%r8d                                                                                                                                                             
       │             g := (0xffff - uint32(m)*0x101) * w / 0xffff                                                                                                                           
  4.77 │       mov    %r15d,%r13d                                                                                                                                                           
  1.37 │       shl    $0x8,%r15d                                                                                                                                                            
  0.02 │       lea    -0xffff(%r13,%r15,1),%r13d                                                                                                                                            
  2.37 │       neg    %r13d                                                                                                                                                                 
  1.96 │       imul   %eax,%r13d                                                                                                                                                            
       │             b := (0xffff - uint32(y)*0x101) * w / 0xffff                                                                                                                           
  1.13 │       mov    %esi,%r15d                                                                                                                                                            
  0.04 │       shl    $0x8,%esi                                                                                                                                                             
  2.33 │       lea    -0xffff(%r15,%rsi,1),%esi                                                                                                                                             
  1.66 │       neg    %esi                                                                                                                                                                  
  1.46 │       imul   %eax,%esi                                                                                                                                                             

So it looks like g := (0xffff - uint32(m)*0x101) * w / 0xffff produces
lea -0xffff(%r13,%r15,1),%r13d, which is a 3-operand LEA.


@dr2chase dr2chase commented Jun 25, 2018

Are there objections to waiting till 1.12? The performance improvements are overall very small (0.16% is what I measure pretty consistently, including across a much larger selection of benchmarks), and it's getting very late in the 1.11 cycle and we seem to be behind schedule.

To summarize the performance changes, across the low-noise 256 benchmarks (out of 488 total, low noise is less than 2% max standard deviation reported across 30 trials) 15 showed >= 1% improvement, 16 showed >= 1% slowdown.

For comparison, the experiment to pretend we are only targeting more recent versions of Intel hardware (CL 117925) had 83 improved, 6 slowed down, and a geomean improvement of 1.28%.

@dr2chase dr2chase removed this from the Go1.11 milestone Jun 26, 2018
@dr2chase dr2chase added this to the Go1.12 milestone Jun 26, 2018

@dr2chase dr2chase commented Jun 26, 2018

Abusing "early-in-cycle" so we don't forget it, this one is low risk.

gopherbot pushed a commit that referenced this issue Aug 20, 2018
go tool objdump ../bin/go | grep "\.go\:" | grep -c "LEA.*0x.*[(].*[(].*"
Before: 1012
After: 20

Updates #21735

Benchmarks thanks to
Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
benchstat -geomean *.stdout | grep -v pkg:
name                                       old time/op    new time/op    delta
FastTest2KB-12                              131.0ns ± 0%   131.0ns ± 0%    ~     (all equal)
BaseTest2KB-12                              601.3ns ± 0%   601.3ns ± 0%    ~     (p=0.942 n=45+41)
Encoding4KBVerySparse-12                    15.38µs ± 1%   15.41µs ± 0%  +0.24%  (p=0.000 n=44+43)
Join_8-12                                    2.117s ± 2%    2.128s ± 2%  +0.51%  (p=0.007 n=48+48)
HashimotoLight-12                           1.663ms ± 1%   1.668ms ± 1%  +0.35%  (p=0.006 n=49+49)
Sha3_224_MTU-12                             4.843µs ± 0%   4.836µs ± 0%  -0.14%  (p=0.000 n=45+42)
GenSharedKeyP256-12                         73.74µs ± 0%   70.92µs ± 0%  -3.82%  (p=0.000 n=48+47)
Run/10k/1-12                                 24.81s ± 0%    24.88s ± 0%  +0.30%  (p=0.000 n=48+47)
Run/10k/16-12                                4.621s ± 2%    4.625s ± 3%    ~     (p=0.776 n=50+50)
Dnrm2MediumPosInc-12                        4.018µs ± 0%   4.019µs ± 0%    ~     (p=0.060 n=50+48)
DasumMediumUnitaryInc-12                    855.3ns ± 0%   855.0ns ± 0%  -0.03%  (p=0.000 n=47+42)
Dgeev/Circulant10-12                        40.45µs ± 1%   41.11µs ± 1%  +1.62%  (p=0.000 n=45+49)
Dgeev/Circulant100-12                       10.42ms ± 2%   10.61ms ± 1%  +1.83%  (p=0.000 n=49+48)
MulWorkspaceDense1000Hundredth-12           64.69ms ± 1%   64.63ms ± 1%    ~     (p=0.718 n=48+48)
ScaleVec10000Inc20-12                       22.31µs ± 1%   22.29µs ± 1%    ~     (p=0.424 n=50+50)
ValidateVersionTildeFail-12                 737.6ns ± 1%   736.0ns ± 1%  -0.22%  (p=0.000 n=49+49)
StripHTML-12                                2.846µs ± 0%   2.806µs ± 1%  -1.40%  (p=0.000 n=43+50)
ReaderContains-12                           6.073µs ± 0%   5.999µs ± 0%  -1.22%  (p=0.000 n=48+48)
EncodeCodecFromInternalProtobuf-12          5.817µs ± 2%   5.555µs ± 2%  -4.51%  (p=0.000 n=47+47)
TarjanSCCGnp_10_tenth-12                    7.091µs ± 5%   7.132µs ± 7%    ~     (p=0.361 n=50+50)
TarjanSCCGnp_1000_half-12                   82.25ms ± 3%   81.29ms ± 2%  -1.16%  (p=0.000 n=50+43)
AStarUndirectedmallWorld_10_2_2_2_Heur-12   15.18µs ± 8%   15.11µs ± 7%    ~     (p=0.511 n=50+49)
LouvainDirectedMultiplex-12                 20.92ms ± 1%   21.00ms ± 1%  +0.36%  (p=0.000 n=48+49)
WalkAllBreadthFirstGnp_10_tenth-12          2.974µs ± 4%   2.964µs ± 5%    ~     (p=0.504 n=50+50)
WalkAllBreadthFirstGnp_1000_tenth-12        9.733ms ± 4%   9.741ms ± 4%    ~     (p=0.774 n=48+50)
TextMovementBetweenSegments-12              432.8µs ± 0%   433.2µs ± 1%    ~     (p=0.128 n=50+50)
Growth_MultiSegment-12                      13.11ms ± 0%   13.19ms ± 1%  +0.58%  (p=0.000 n=44+46)
AddingFields/Zap.Sugar-12                   1.296µs ± 1%   1.310µs ± 2%  +1.09%  (p=0.000 n=43+43)
AddingFields/apex/log-12                    34.19µs ± 1%   34.31µs ± 1%  +0.35%  (p=0.000 n=45+45)
AddingFields/inconshreveable/log15-12       30.08µs ± 2%   30.07µs ± 2%    ~     (p=0.803 n=48+47)
AddingFields/sirupsen/logrus-12             6.683µs ± 3%   6.735µs ± 3%  +0.78%  (p=0.000 n=43+42)

[Geo mean]                                  143.5µs        143.3µs       -0.16%

Change-Id: I637203c75c837737f1febced75d5985703e51044
Run-TryBot: Martin Möhrmann <>
TryBot-Result: Gobot Gobot <>
Reviewed-by: David Chase <>
Reviewed-by: Ilya Tocar <>
@bradfitz bradfitz modified the milestones: Go1.12, Go1.13 Nov 3, 2018
@andybons andybons modified the milestones: Go1.13, Go1.14 Jul 8, 2019
@rsc rsc modified the milestones: Go1.14, Backlog Oct 9, 2019

@josharian josharian commented Jan 30, 2020

See also #36468 (comment), for an approach that can reduce the number of slow LEAs.
