
cmd/compile: various low level x86 instruction generation improvements #28671

martisch opened this Issue Nov 8, 2018 · 6 comments



martisch commented Nov 8, 2018

While reading (too much) Go-generated assembly code I picked up a few x86 code sequences that seemed suboptimal. I do not remember where I spotted each of them; some might just come from my imagination, from compiler optimization guides, or from code outside the standard library.

Instead of creating an issue per item, here is a list of some possible low-level performance improvements. Note that a pattern appearing here does not mean it is common enough to be worth optimizing; that still needs to be evaluated. However, the list can serve to spark ideas for other improvements and give new compiler contributors something to try: adding SSA optimization rules or codegen improvements and benchmarking their effect and frequency. UPDATE: CLs should include statistics/examples of use in the standard library and/or in general code when introducing optimizations.

Current assembly gc and gccgo create can be quickly checked with
Given the low-level nature there may be oversights in what is possible and in whether these are size or performance improvements. As always, benchmarks and tests are needed.

Many of them should be considered as examples for more general optimizations.

The list:

  1. no baseless lea:
    There is no LEA emitted without a base register, only base+index*operandsize, so x+x+x+x is compiled to leaq+addq instead of a single leaq [0 + x*4]. Some additional combination rules could fix this.

  2. too many imuls
    x*x*x*x is compiled as 3 imuls when 2 are sufficient: (x*x)*(x*x).

  3. set all bits in a register (UPDATE: not generally worth it due to false dependency)
    Instead of MOVQ $-0x1, Reg use ORQ $-0x1, Reg, which is ~4 bytes shorter for 64-bit ints but may create a false dependency on Reg, as noted by @randall77.

  4. comparing modulo
    x % 2 == 0 can be compiled to andl $1, %eax; testq %rax, %rax (or btl $0, AX ...) instead of shift+shift+add+shift+shift+cmp.

  5. instead of add, use subtract for some powers of 2
    sub -128 encodes shorter than add 128, since -128 fits in a sign-extended 8-bit immediate while 128 needs a 32-bit one. (Watch out that the flags result is not used.)

  6. for some powers of 2, less-or-equal encodes better than less-than
    x < 128 encodes to CMPQ $0x80, AX; JG, which is larger than CMPQ $0x7f, AX; JGE, since 0x7f fits in a sign-extended byte immediate while 0x80 does not. Similar rewrites should work for other comparisons and constant encodings.

  7. unsigned division with int
    If the int operands are known to be non-negative, a XOR+DIV could be used instead of CQO+IDIV.

  8. optimize modulo with shifts that produce a power of 2
    var x, y uint; x % (1<<y) can be replaced with x & ((1<<y)-1).

@josharian @randall77 @TocarIP @quasilyte


randall77 commented Nov 8, 2018

> Instead of MOVQ $-0x1, Reg use ORQ $-0x1, Reg which is shorter.

Does this have a false dependency on the previous value of Reg?
XORL AX, AX is a special case that breaks the dependency; I am not sure whether ORQ $-1, AX does as well.


martisch commented Nov 8, 2018

Thanks. You are right that ORQ could introduce an unwarranted dependency (or at least it is not known that all x86 chips optimize the dependency away). It is likely not always a win unless optimizing for size only, and it would need too much complex analysis to be sure it is a win in a concrete instruction stream.
Added as feedback to the list.


josharian commented Nov 8, 2018

Somewhat related: #21439.


seebs commented Nov 8, 2018

Does "shorter" relate at all to runtime performance? I suppose smaller code improves cache performance in general, all else being equal.


martisch commented Nov 9, 2018

> does "shorter" relate at all to runtime performance?

Granted, these effects are hard to measure, especially in micro benchmarks. A smaller binary footprint means less cache use. Depending on the instruction and microarchitecture, the instruction decoders can also decode more short/simple instructions per cycle, and some architectures seem to fetch "only" 16/32 bytes of instructions per cycle. More instructions fit into loop buffers if they are smaller in bytes, so more loops can make use of the loop buffers and thereby could execute faster.


gopherbot commented Nov 14, 2018

Change mentions this issue: cmd/compile/internal/ssa: optimized x*x*x*x to only 2 imuls
