A conversion that involves runtime.getitab (convI2I, assertI2I, assertE2I) is generally much slower than an ordinary
interface value construction from a statically known concrete type.
This is fast: *bytes.Buffer => io.Writer
This is not as fast: io.ReadWriter => io.Writer
The Go compiler has an IR-based devirtualization pass that performs local method call devirtualization. It's less limited than it might sound: thanks to inlining, it works well enough in many situations.
It does not, however, handle I2I-like operations. Even if we know that the converted value has some non-interface type T, we still perform an expensive I2I operation.
This situation occurs a lot in codebases that work with hierarchical data. An AST is an example: we have ast.Node and ast.Expr. It's quite common to write a function that accepts ast.Node while some other function operates on ast.Expr. In the *ast.Ident -> ast.Expr -> ast.Node chain, we can simplify the ast.Expr -> ast.Node conversion if we use the information that the ast.Expr is actually an *ast.Ident.
A real-world case can be found in the Go compiler code.
Another situation is when a constructor returns an interface type, like hash.Hash, and the value is then passed as an io.Writer. In many cases we can avoid convI2I for the md5.New() -> io.Writer case (this works for most hash/crypto-related constructors).
Here is a simple benchmark that illustrates the performance problem with I2I:
name old time/op new time/op delta
Devirt-8 11.1ns ± 1% 2.4ns ± 1% -78.55% (p=0.000 n=10+10)
The same idea applies to the type assertions that involve interface-to-interface conversion.
The optimized code is also usually smaller from the machine code point of view.
.text segment size differences:
go tool: -224 bytes
cmd/asm: -192 bytes
Total binary size differences:
go tool: -543 bytes
cmd/asm: -310 bytes
In general, this is not a binary size optimization, as convI2I is quite rare on its own.
There are even fewer cases that we can optimize at compile time.
I provided these numbers just to be sure that we cover the binary size impact as well.
Note: this does not solve all convI2I issues, but it can at least reduce the number of convI2I calls we see in our CPU profiles.
I'll send a CL that provides my first attempt at this optimization. If the CL is not good enough, we can at least have this issue with some sweet numbers to think about.
Changing devirtualize.go from ir.VisitList to ir.EditChildren makes it measurably slower.
This is why a slightly less simple approach is used: we keep ir.VisitList but handle some nodes via their parents. It covers less code, but in practice the optimization coverage should be OK. Suggestions are welcome: it may be that we can introduce this optimization in some other part of the compiler.
This approach runs with almost identical speed, compilebench shows no significant diff this time: