cmd/compile: teach prove pass that x|1 > 0 for unsigned x #46444

Closed

josharian opened this issue May 29, 2021 · 6 comments
@josharian (Contributor) commented May 29, 2021

package p

import "math/bits"

func f(x uint32) int {
    return bits.Len32(x|1)
}

Compiles to:

TEXT    "".f(SB), NOSPLIT|ABIInternal, $0-16
        FUNCDATA        $0, gclocals·33cdeccccebe80329f1fdbee7f5874cb(SB)
        FUNCDATA        $1, gclocals·33cdeccccebe80329f1fdbee7f5874cb(SB)
        MOVL    "".x+8(SP), AX
        ORL     $1, AX
        LEAQ    (AX)(AX*1), AX
        LEAQ    1(AX), AX
        BSRQ    AX, AX
        MOVQ    AX, "".~r1+16(SP)
        RET

Instead of the LEAQ/LEAQ/BSRQ triple of instructions, it should be a single BSRL.
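For anyone puzzling over the LEAQ pair: it builds 2*x+1, so the BSRQ never sees a zero input (BSR's result is undefined for zero). A quick sanity check of the arithmetic, with bsr32/bsr64 as hand-written stand-ins for the hardware instruction (a sketch, not compiler output):

package main

import (
    "fmt"
    "math/bits"
)

// bsr32/bsr64 mimic x86 BSR for a nonzero argument:
// the index of the highest set bit.
func bsr32(y uint32) int { return 31 - bits.LeadingZeros32(y) }
func bsr64(y uint64) int { return 63 - bits.LeadingZeros64(y) }

func main() {
    for _, x := range []uint32{0, 1, 5, 42, 1 << 31, ^uint32(0)} {
        guarded := bsr64(2*uint64(x) + 1) // what the LEAQ/LEAQ/BSRQ sequence computes
        direct := bsr32(x|1) + 1          // a single BSR, valid once x|1 is known nonzero
        fmt.Println(x, guarded == bits.Len32(x), direct == bits.Len32(x|1))
    }
}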

There are two parts to this.

Part one is introducing OpBitLenNNNonZero ops, much as we have OpCtzNNNonZero ops, and hooking them up appropriately. This is easy.

Part two is teaching the prove pass that unsigned x|1 > 0.

Seems like it should be a simple thing to teach, but I always get lost in the prove pass. Any hints?

cc @zdjones @dr2chase @randall77

@randall77 (Contributor) commented May 29, 2021

Not sure about the prove pass. Might be easiest in the section where we find length and capacity ops. When you find v=(OR x (CONST c)) add f.update(b, v, c, unsigned, gt|eq).

It might be easier just to do, in generic.rules

(BitLen (OR x (CONST [c]))) && c != 0 => (BitLenNonZero (OR x (CONST [c]))).

But then maybe you also want to handle if x != 0 { ... bits.Len32(x) }. That would require prove pass modifications.
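For concreteness, the kind of code that second case covers (log2 here is just an illustrative name, not something from the issue):

package p

import "math/bits"

func log2(x uint32) int {
    if x == 0 {
        return -1
    }
    // The branch above guarantees x != 0, but only the prove pass can
    // propagate that fact to the BitLen32 op; a rewrite rule that matches
    // an OR-with-constant can't see it.
    return bits.Len32(x) - 1
}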

@josharian (Contributor, Author) commented May 29, 2021

Thanks! Given that this is fairly specialized, doing it all in rewrite rules makes sense.

@josharian (Contributor, Author) commented May 31, 2021

Implemented. Surprisingly, it made performance worse.

@josharian closed this May 31, 2021

@dr2chase (Contributor) commented Jun 1, 2021

What was your benchmark?

@cristaloleg commented Jun 6, 2021

@dr2chase I found this issue via this repo: https://github.com/josharian/log10, and the mentioned benchmark is probably here: https://github.com/josharian/log10/blob/main/log10_test.go#L114

@josharian (Contributor, Author) commented Jun 13, 2021

Thanks, @cristaloleg. That's the benchmark, plus some minor variations that I was experimenting with. (I no longer have the exact code in front of me, but IIRC they were the obvious changes you might toy with when trying to get good generated code out of those functions.)
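For anyone who wants to reproduce the regression, a minimal benchmark of the same shape would look roughly like this (names here are placeholders, not the code from that repo):

package p

import (
    "math/bits"
    "testing"
)

var sink int

func BenchmarkLen32Or1(b *testing.B) {
    x := uint32(1)
    for i := 0; i < b.N; i++ {
        sink = bits.Len32(x | 1)
        x += 7 // vary the input so the computation isn't trivially hoisted
    }
}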

The rules I added to experiment with were in AMD64.rules:

(BitLen32NonZero <t> x) => (ADDQconst [1] (Select0 <t> (BSRQ x)))

and generic.rules:

(BitLen32 x:(Or32 (Const32 [c]) _)) && c != 0 => (BitLen32NonZero x)

This made the generated code shorter, but slower.

I also added these to AMD64.rules, which generated better code, including in one or two other places in the standard library:

// Insert ADCQ.
(CMOVQCS x (ADDQconst [1] x) f) && f.Op != OpAMD64InvertFlags => (Select0 (ADCQconst [0] x f))
(Select0 (ADCQconst [0] x f:(InvertFlags g))) => (CMOVQCS x (ADDQconst [1] x) f)

(ADDQconst [c] (Select0 (ADCQconst [d] x f))) && is32Bit(int64(c)+int64(d)) => (Select0 (ADCQconst [int32(c+d)] x f))

(InvertFlags (InvertFlags f)) => f

I didn't mail these because I was frustrated that they failed to capture many of the cases in which we could use ADCQ. The problem is that we canonicalize many CMOVQCS into CMOVQHI for CSE purposes, including pointlessly inserting InvertFlags to do so. But there's no way to absorb an InvertFlags into ADCQ, so we end up having to rip out the ADCQ to avoid having an InvertFlags reach codegen. This is a general problem with doing optimizations like ADCQ and SBBQ and friends on amd64, and I don't see a clear path forward for it.
