suboptimal codegen when initializing an array of padded structs #122274
Labels:
- A-codegen (Area: Code generation)
- A-LLVM (Area: Code generation parts specific to LLVM. Both correctness bugs and optimization-related issues.)
- C-optimization (Category: An issue highlighting optimization opportunities or PRs implementing such)
- I-heavy (Issue: Problems and improvements with respect to binary size of generated code.)
- T-compiler (Relevant to the compiler team, which will review and decide on the PR/issue.)
in the following code, `reset_implicit_padding` generates somewhat worse assembly under `-C opt-level=3` than `reset_explicit_padding`. the version with explicit padding compiles to a `memset` - i'd hoped both would `memset`.
godbolt link for reference.
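The code blocks didn't survive extraction, so here is a minimal sketch of the shape of code being described. The struct layout and names other than the two function names are my own reconstruction, not necessarily the exact snippet from the issue:

```rust
// Hypothetical reconstruction: a 7-byte payload padded to 8 bytes.
#[derive(Clone, Copy, Default)]
pub struct Padded {
    a: u32, // 4 bytes
    b: u16, // 2 bytes
    c: u8,  // 1 byte, followed by 1 implicit padding byte
}

// Same fields, but with the padding byte made an explicit field,
// so the struct has no implicit padding at all.
#[derive(Clone, Copy, Default)]
pub struct Explicit {
    a: u32,
    b: u16,
    c: u8,
    _pad: u8,
}

// The reset leaves the implicit padding byte unwritten, so the
// stores may not be merged into a single memset.
pub fn reset_implicit_padding(xs: &mut [Padded; 32]) {
    *xs = [Padded::default(); 32];
}

// With every byte of every element written, the whole-array reset
// can lower to one memset.
pub fn reset_explicit_padding(xs: &mut [Explicit; 32]) {
    *xs = [Explicit::default(); 32];
}
```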
it turns out that when an array like this does become a `memset`, it's seemingly often because LLVM's loop-idiom pass realized the stores can be turned into a `memset`, rather than rustc ever emitting a `memset` directly.
i don't know enough to tell whether this would need rustc to initialize padding bytes as `undef` (?), or whether they're already `undef` and there's some other reason LLVM doesn't figure out this could be a `memset`. i also can't tell whether this would conflict with this issue about padding copies or the regression test for it. there are certainly cases where copying large amounts of padding would be counterproductive, though if writing a padding byte could turn 4+2+1 bytes of stores into one 8-byte store, that's probably always a net win.
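To make the 4+2+1 arithmetic concrete (the field layout here is my own illustration, not from the issue): fields covering 7 bytes in a 4-aligned struct get the size rounded up to 8, leaving exactly one trailing padding byte.

```rust
use std::mem::{align_of, size_of};

// Fields cover 4 + 2 + 1 = 7 bytes; the alignment is 4, so the size
// is rounded up to 8, leaving one implicit padding byte at the end.
#[repr(C)]
struct S {
    a: u32,
    b: u16,
    c: u8,
}

fn main() {
    assert_eq!(size_of::<S>(), 8);
    assert_eq!(align_of::<S>(), 4);
    // Writing that one padding byte too would allow a single 8-byte
    // store in place of separate 4-, 2-, and 1-byte stores.
}
```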
(the original code i minimized had 6-byte and 14-byte structs aligned to 8 and 16 respectively, and for entirely indecipherable reasons the assignment also got fully unrolled, yielding ~600 `mov` instructions rather than a `memset`. i would much rather that be a call to `memset`, and have added a `_pad` field for the time being.)
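The `_pad` workaround mentioned above might look like the following for the 6-byte, 8-aligned case. This is a hypothetical illustration, assuming the explicit field fills the struct out to its full size so a reset writes every byte:

```rust
// Hypothetical workaround: a 6-byte payload aligned to 8, with the two
// trailing padding bytes made explicit so a reset writes all 8 bytes.
#[derive(Clone, Copy, Default)]
#[repr(C, align(8))]
struct Elem {
    a: u32,
    b: u16,
    _pad: [u8; 2], // fills the struct out to its full 8-byte size
}

fn main() {
    // The explicit field leaves no implicit padding behind.
    assert_eq!(std::mem::size_of::<Elem>(), 8);
    assert_eq!(std::mem::align_of::<Elem>(), 8);
}
```

The cost is that `_pad` must be carried in every constructor, which is exactly the kind of busywork the issue would like the compiler to make unnecessary.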