
[InstCombine] Canonicalize constant GEPs to i8 source element type #68882

Merged
merged 1 commit into from Jan 24, 2024

Conversation


@nikic nikic commented Oct 12, 2023

This patch canonicalizes getelementptr instructions with constant indices to use the i8 source element type. This makes it easier for optimizations to recognize that two GEPs are identical, because they don't need to see past many different ways to express the same offset.

This is a first step towards https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699. This is limited to constant GEPs only for now, as they have a clear canonical form, while we're not yet sure how exactly to deal with variable indices.
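As a hedged illustration (not taken from the patch itself), the canonical form collapses syntactically different constant GEPs that compute the same byte offset:

```llvm
; Before: two ways to express the same 12-byte offset
; (assuming a typical data layout where i32 is 4 bytes)
%a = getelementptr i32, ptr %p, i64 3
%b = getelementptr [3 x i32], ptr %p, i64 1, i64 0

; After canonicalization: a single i8 form with the offset in bytes
%a = getelementptr i8, ptr %p, i64 12
%b = getelementptr i8, ptr %p, i64 12
```

With both GEPs in the same form, later passes can recognize them as identical without reasoning about source element types.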

The test llvm/test/Transforms/PhaseOrdering/switch_with_geps.ll gives two representative examples of the kind of optimization improvement we expect from this change. In the first test SimplifyCFG can now realize that all switch branches are actually the same. In the second test it can convert the switch into simple arithmetic. These are representative of common optimization failures we see in Rust.

Fixes #69841.

@nikic nikic (Contributor, Author) left a comment

I looked through the test diffs, and it seems like the only substantial regressions are all related to the indexed compare fold, which should be made type-independent.

; CHECK-NEXT: [[T16:%.*]] = call i32 (ptr, ...) @printf(ptr noundef nonnull dereferenceable(1) @.str1, ptr nonnull [[T12]]) #[[ATTR0]]
; CHECK-NEXT: [[T84:%.*]] = icmp eq i32 [[INDVAR]], 0
; CHECK-NEXT: [[T84:%.*]] = icmp eq ptr [[T12]], [[ORIENTATIONS]]
Contributor Author

Index compare fold regression.

Contributor Author

Turns out this one is not the indexed compare fold, but the icmp %p, gep(%p) fold. Now we have two GEPs, though, so it no longer triggers.
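A rough sketch of the shape involved (illustrative IR, not the actual test, assuming i32 is 4 bytes):

```llvm
; Before: %p is compared against a single GEP of itself, so the
; icmp %p, gep(%p) fold can fire and become a condition on the index
%gep = getelementptr inbounds [4 x i32], ptr %p, i64 1, i64 %i
%cmp = icmp eq ptr %p, %gep

; After: the constant part has been split into a separate i8 GEP; with
; a chain of two GEPs, the single-GEP pattern no longer matches
%gep1 = getelementptr inbounds i8, ptr %p, i64 16
%gep2 = getelementptr inbounds i32, ptr %gep1, i64 %i
%cmp = icmp eq ptr %p, %gep2
```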

Contributor Author

(I still plan to fix this remaining regression, but I don't think it needs to block this PR.)

llvm/test/Transforms/InstCombine/indexed-gep-compares.ll (review thread outdated, resolved)
llvm/test/Transforms/InstCombine/opaque-ptr.ll (review thread outdated, resolved)
; CHECK-NEXT: [[GEP2:%.*]] = getelementptr i64, ptr [[P]], i64 2
; CHECK-NEXT: [[S:%.*]] = select i1 [[C:%.*]], ptr [[GEP1]], ptr [[GEP2]]
; CHECK-NEXT: [[S_V:%.*]] = select i1 [[C:%.*]], i64 4, i64 16
; CHECK-NEXT: [[S:%.*]] = getelementptr i8, ptr [[P:%.*]], i64 [[S_V]]
Contributor Author

Improvement.

; CHECK-NEXT: [[GEP:%.*]] = getelementptr i32, ptr [[PHI]], i64 1
; CHECK-NEXT: [[TMP1:%.*]] = phi i64 [ 4, [[IF]] ], [ 16, [[ELSE]] ]
; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[P:%.*]], i64 [[TMP1]]
; CHECK-NEXT: [[GEP:%.*]] = getelementptr i8, ptr [[TMP2]], i64 4
Contributor Author

Improvement.

; CHECK-NEXT: [[TMP1:%.*]] = getelementptr [6 x i32], ptr [[ARG:%.*]], i64 3, i64 [[ARG1:%.*]]
; CHECK-NEXT: ret ptr [[TMP1]]
; CHECK-NEXT: [[TMP1:%.*]] = getelementptr i8, ptr [[ARG:%.*]], i64 72
; CHECK-NEXT: [[TMP2:%.*]] = getelementptr [6 x i32], ptr [[TMP1]], i64 0, i64 [[ARG1:%.*]]
Contributor Author

Possible regression. We kind of want to move in this direction, though it happens by accident here.

nikic added a commit that referenced this pull request Nov 28, 2023
…71663)

The indexed compare fold converts comparisons of GEPs with same
(indirect) base into comparisons of offset. Currently, it only supports
GEPs with the same source element type.

This change makes the transform operate on offsets instead, which
removes the type dependence. To keep closer to the scope of the original
implementation, this keeps the limitation that we should only have at
most one variable index per GEP.
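A hedged sketch of the offset-based formulation (illustrative IR, assuming a data layout where i32 is 4 bytes):

```llvm
; Two GEPs off the same base with different source element types
%gep1 = getelementptr inbounds i32, ptr %base, i64 %i
%gep2 = getelementptr inbounds i8, ptr %base, i64 8
%cmp = icmp eq ptr %gep1, %gep2

; Comparing offsets instead of requiring matching GEP types:
; the compare becomes 4 * %i == 8
%off = shl nsw i64 %i, 2
%cmp = icmp eq i64 %off, 8
```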

This addresses the main regression from
#68882.

TBH I have some doubts that this is really a useful transform (at least
for the case where there are extra pointer users, so we have to
rematerialize pointers at some point). I can only assume it exists for a
reason...
@@ -2282,6 +2282,15 @@ Instruction *InstCombinerImpl::visitGetElementPtrInst(GetElementPtrInst &GEP) {
if (MadeChange)
return &GEP;

// Canonicalize constant GEPs to i8 type.
Collaborator

FWIW, this is a bit interesting downstream with non-8-bit addressing units :-)

Today i8 would be quite rare as GEP type for us as it is smaller than the addressing unit size. But I figure that canonicalizing to some other type downstream could end up as a lot of work (or lots of missed optimizations).

Afaict accumulateConstantOffset returns an offset that depends on TypeStoreSize. So as long as this is used such that the address is still given by baseptr + DL.getTypeStoreSize(i8) * Offset, and not baseptr + 8 * Offset, then I guess things will be fine (i.e. not assuming that the offset is an 8-bit offset).

As far as I can tell you could just as well have picked i1 instead of i8 (given that DL.getTypeStoreSize(i1) == DL.getTypeStoreSize(i8)). That would probably look confusing, but it is effectively what happens for us when using i8 as the type, since we can't address individual octets.

(I also see this as a reminder for looking at the ptradd RFC to understand how that will impact us, so that we are prepared for that.)

Contributor Author

GEP operates in terms of bytes, not bits. The size of i8 is required to be 1. GEP doesn't care what the mapping from size to size in bits looks like. So if you want to map i8 to 32 bits, that should be fine as far as GEP/ptradd are concerned (though it of course breaks all kinds of other assumptions).

Contributor Author

Note that i8 GEPs are already generated by some important components like SCEVExpander, so it's pretty likely that your target should handle them already (unless you are currently patching all the places generating them of course).

Collaborator

Right. So things can be expected to just work (given getTypeStoreSize(i8)==1), even when the addressing unit isn't 8 bits.

Since

%gep = getelementptr i3, ptr %p, i16 1
%gep = getelementptr i8, ptr %p, i16 1
%gep = getelementptr i16, ptr %p, i16 1

all are equivalent (for my target), this patch just makes that more obvious by canonicalizing them into a single form.
So we just need to update some lit test checks to expect "getelementptr i8" instead of "getelementptr i16" downstream, and hopefully things will be fine.

Contributor

Reading about the planned steps to move from getelementptr to ptradd, I would appreciate the opportunity to not add more hardcoded instances of i8 (i.e. isIntegerTy(8) or getInt8Ty below) in this effort. You don't need an 8-bit container; you want the canonical type that has getTypeStoreSize(Ty) == 1. It would be much better if there were a minimal API that is just a slight abstraction over i8 but makes this goal apparent. (I would personally use something with the terminology "addressable unit type", so "isAddressableUnitType", "getAddressableUnitType", etc.)

For context, we are also a downstream user in architectural contexts with memories that are not byte addressable. This can be a main memory that is word addressable and does not have byte or sub-word access (so the canonical unit type is naturally i16 or i24... although i8 would still work), but also additional memories that cannot hold integers at all, for example, a separate memory (address space) for vectors (the canonical unit type is a vector and i8 is not storable in that memory/address space). Whenever there are hardcoded instances of i8 in the code base, we typically have to review them in our tree (yes, SCEVExpander). But it makes a difference whether the code is assuming an 8 bit container, or just needs a type with getTypeStoreSize(Ty) == 1.

@nikic nikic marked this pull request as ready for review December 20, 2023 11:11
@llvmbot llvmbot added clang Clang issues not falling into any other category backend:AMDGPU backend:RISC-V coroutines C++20 coroutines llvm:analysis llvm:transforms clang:openmp OpenMP related changes to Clang labels Dec 20, 2023
@llvmbot
Collaborator

llvmbot commented Dec 20, 2023

@llvm/pr-subscribers-backend-risc-v
@llvm/pr-subscribers-backend-amdgpu

@llvm/pr-subscribers-clang

Author: Nikita Popov (nikic)

Changes

This patch canonicalizes getelementptr instructions with constant indices to use the i8 source element type. This makes it easier for optimizations to recognize that two GEPs are identical, because they don't need to see past many different ways to express the same offset.

This is a first step towards https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699. This is limited to constant GEPs only for now, as they have a clear canonical form, while we're not yet sure how exactly to deal with variable indices.

The test llvm/test/Transforms/PhaseOrdering/switch_with_geps.ll gives two representative examples of the kind of optimization improvement we expect from this change. In the first test SimplifyCFG can now realize that all switch branches are actually the same. In the second test it can convert it into simple arithmetic. These are representative of common enum optimization failures we see in Rust.


Patch is 775.99 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/68882.diff

174 Files Affected:

  • (modified) clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c (+5-5)
  • (modified) clang/test/CodeGen/aarch64-ls64-inline-asm.c (+9-9)
  • (modified) clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c (+24-24)
  • (modified) clang/test/CodeGen/attr-riscv-rvv-vector-bits-bitcast.c (+12-12)
  • (modified) clang/test/CodeGen/cleanup-destslot-simple.c (+2-2)
  • (modified) clang/test/CodeGen/hexagon-brev-ld-ptr-incdec.c (+3-3)
  • (modified) clang/test/CodeGen/ms-intrinsics.c (+6-6)
  • (modified) clang/test/CodeGen/nofpclass.c (+4-4)
  • (modified) clang/test/CodeGen/union-tbaa1.c (+2-2)
  • (modified) clang/test/CodeGenCXX/RelativeVTablesABI/dynamic-cast.cpp (+1-1)
  • (modified) clang/test/CodeGenCXX/RelativeVTablesABI/type-info.cpp (+1-3)
  • (modified) clang/test/CodeGenCXX/microsoft-abi-dynamic-cast.cpp (+6-6)
  • (modified) clang/test/CodeGenCXX/microsoft-abi-typeid.cpp (+1-1)
  • (modified) clang/test/CodeGenObjC/arc-foreach.m (+2-2)
  • (modified) clang/test/CodeGenObjCXX/arc-cxx11-init-list.mm (+1-1)
  • (modified) clang/test/Headers/__clang_hip_math.hip (+12-12)
  • (modified) clang/test/OpenMP/bug57757.cpp (+6-6)
  • (modified) llvm/lib/Transforms/InstCombine/InstructionCombining.cpp (+9)
  • (modified) llvm/test/Analysis/BasicAA/featuretest.ll (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/vector-alloca-bitcast.ll (+6-6)
  • (modified) llvm/test/CodeGen/BPF/preserve-static-offset/load-inline.ll (+2-2)
  • (modified) llvm/test/CodeGen/BPF/preserve-static-offset/load-unroll-inline.ll (+2-2)
  • (modified) llvm/test/CodeGen/BPF/preserve-static-offset/load-unroll.ll (+4-4)
  • (modified) llvm/test/CodeGen/BPF/preserve-static-offset/store-unroll-inline.ll (+2-2)
  • (modified) llvm/test/CodeGen/Hexagon/autohvx/vector-align-tbaa.ll (+27-27)
  • (modified) llvm/test/Transforms/Coroutines/coro-async.ll (+1-1)
  • (modified) llvm/test/Transforms/Coroutines/coro-retcon-alloca-opaque-ptr.ll (+1-1)
  • (modified) llvm/test/Transforms/Coroutines/coro-retcon-alloca.ll (+1-1)
  • (modified) llvm/test/Transforms/Coroutines/coro-retcon-once-value.ll (+3-3)
  • (modified) llvm/test/Transforms/Coroutines/coro-retcon-resume-values.ll (+4-4)
  • (modified) llvm/test/Transforms/Coroutines/coro-swifterror.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/2007-03-25-BadShiftMask.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/2009-01-08-AlignAlloca.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/2009-02-20-InstCombine-SROA.ll (+16-16)
  • (modified) llvm/test/Transforms/InstCombine/X86/x86-addsub-inseltpoison.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/X86/x86-addsub.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/add3.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/array.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/assume.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/cast_phi.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/catchswitch-phi.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/compare-alloca.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/extractvalue.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/gep-addrspace.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/gep-canonicalize-constant-indices.ll (+9-9)
  • (modified) llvm/test/Transforms/InstCombine/gep-combine-loop-invariant.ll (+3-3)
  • (modified) llvm/test/Transforms/InstCombine/gep-custom-dl.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/gep-merge-constant-indices.ll (+7-7)
  • (modified) llvm/test/Transforms/InstCombine/gep-vector-indices.ll (+4-4)
  • (modified) llvm/test/Transforms/InstCombine/gep-vector.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/gepphigep.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/getelementptr.ll (+25-24)
  • (modified) llvm/test/Transforms/InstCombine/icmp-custom-dl.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/icmp-gep.ll (+4-4)
  • (modified) llvm/test/Transforms/InstCombine/indexed-gep-compares.ll (+8-8)
  • (modified) llvm/test/Transforms/InstCombine/intptr1.ll (+10-10)
  • (modified) llvm/test/Transforms/InstCombine/intptr2.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/intptr3.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/intptr4.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/intptr5.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/intptr7.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/load-store-forward.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/load.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/loadstore-metadata.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/memchr-5.ll (+24-24)
  • (modified) llvm/test/Transforms/InstCombine/memchr-9.ll (+23-23)
  • (modified) llvm/test/Transforms/InstCombine/memcmp-3.ll (+28-28)
  • (modified) llvm/test/Transforms/InstCombine/memcmp-4.ll (+4-4)
  • (modified) llvm/test/Transforms/InstCombine/memcmp-5.ll (+13-13)
  • (modified) llvm/test/Transforms/InstCombine/memcmp-6.ll (+6-6)
  • (modified) llvm/test/Transforms/InstCombine/memcmp-7.ll (+11-11)
  • (modified) llvm/test/Transforms/InstCombine/memcpy_alloca.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/memrchr-5.ll (+32-32)
  • (modified) llvm/test/Transforms/InstCombine/memset2.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/multi-size-address-space-pointer.ll (+7-7)
  • (modified) llvm/test/Transforms/InstCombine/non-integral-pointers.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/opaque-ptr.ll (+20-22)
  • (modified) llvm/test/Transforms/InstCombine/phi-equal-incoming-pointers.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/phi-timeout.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/phi.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/pr39908.ll (+3-3)
  • (modified) llvm/test/Transforms/InstCombine/pr44242.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/pr58901.ll (+4-3)
  • (modified) llvm/test/Transforms/InstCombine/ptr-replace-alloca.ll (+5-5)
  • (modified) llvm/test/Transforms/InstCombine/select-cmp-br.ll (+8-8)
  • (modified) llvm/test/Transforms/InstCombine/select-gep.ll (+8-8)
  • (modified) llvm/test/Transforms/InstCombine/shift.ll (+4-4)
  • (modified) llvm/test/Transforms/InstCombine/sink_sideeffecting_instruction.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/sprintf-2.ll (+8-8)
  • (modified) llvm/test/Transforms/InstCombine/statepoint-cleanup.ll (+4-4)
  • (modified) llvm/test/Transforms/InstCombine/str-int-3.ll (+23-23)
  • (modified) llvm/test/Transforms/InstCombine/str-int-4.ll (+34-34)
  • (modified) llvm/test/Transforms/InstCombine/str-int-5.ll (+27-27)
  • (modified) llvm/test/Transforms/InstCombine/str-int.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/strcall-bad-sig.ll (+4-4)
  • (modified) llvm/test/Transforms/InstCombine/strcall-no-nul.ll (+13-13)
  • (modified) llvm/test/Transforms/InstCombine/strlen-7.ll (+20-20)
  • (modified) llvm/test/Transforms/InstCombine/strlen-9.ll (+6-6)
  • (modified) llvm/test/Transforms/InstCombine/strncmp-4.ll (+14-14)
  • (modified) llvm/test/Transforms/InstCombine/strncmp-5.ll (+21-21)
  • (modified) llvm/test/Transforms/InstCombine/strncmp-6.ll (+6-6)
  • (modified) llvm/test/Transforms/InstCombine/sub.ll (+3-3)
  • (modified) llvm/test/Transforms/InstCombine/unpack-fca.ll (+27-27)
  • (modified) llvm/test/Transforms/InstCombine/vec_demanded_elts-inseltpoison.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/vec_demanded_elts.ll (+2-2)
  • (modified) llvm/test/Transforms/InstCombine/vec_gep_scalar_arg-inseltpoison.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/vec_gep_scalar_arg.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/vscale_gep.ll (+1-1)
  • (modified) llvm/test/Transforms/InstCombine/wcslen-5.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopUnroll/ARM/upperbound.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopUnroll/peel-loop.ll (+8-8)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/deterministic-type-shrinkage.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/intrinsiccost.ll (+4-4)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/sve-cond-inv-loads.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/sve-interleaved-accesses.ll (+8-8)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/sve-widen-phi.ll (+6-6)
  • (modified) llvm/test/Transforms/LoopVectorize/AArch64/vector-reverse-mask4.ll (+4-4)
  • (modified) llvm/test/Transforms/LoopVectorize/AMDGPU/packed-math.ll (+22-22)
  • (modified) llvm/test/Transforms/LoopVectorize/ARM/mve-qabs.ll (+9-9)
  • (modified) llvm/test/Transforms/LoopVectorize/ARM/mve-reductions.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopVectorize/ARM/mve-selectandorcost.ll (+3-3)
  • (modified) llvm/test/Transforms/LoopVectorize/ARM/pointer_iv.ll (+24-24)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/float-induction-x86.ll (+9-9)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/interleaving.ll (+7-7)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/intrinsiccost.ll (+8-8)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/invariant-store-vectorization.ll (+3-3)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/metadata-enable.ll (+412-412)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/pr23997.ll (+6-6)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/small-size.ll (+4-4)
  • (modified) llvm/test/Transforms/LoopVectorize/X86/x86-interleaved-store-accesses-with-gaps.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopVectorize/consecutive-ptr-uniforms.ll (+2-2)
  • (modified) llvm/test/Transforms/LoopVectorize/extract-last-veclane.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopVectorize/float-induction.ll (+8-8)
  • (modified) llvm/test/Transforms/LoopVectorize/induction.ll (+26-26)
  • (modified) llvm/test/Transforms/LoopVectorize/interleaved-accesses.ll (+10-10)
  • (modified) llvm/test/Transforms/LoopVectorize/reduction-inloop-uf4.ll (+3-3)
  • (modified) llvm/test/Transforms/LoopVectorize/runtime-check.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopVectorize/scalar_after_vectorization.ll (+1-1)
  • (modified) llvm/test/Transforms/LoopVectorize/vector-geps.ll (+7-7)
  • (modified) llvm/test/Transforms/LowerMatrixIntrinsics/multiply-fused-dominance.ll (+56-56)
  • (modified) llvm/test/Transforms/LowerMatrixIntrinsics/multiply-fused-loops.ll (+12-12)
  • (modified) llvm/test/Transforms/LowerMatrixIntrinsics/multiply-fused-multiple-blocks.ll (+36-36)
  • (modified) llvm/test/Transforms/LowerMatrixIntrinsics/multiply-fused.ll (+67-67)
  • (modified) llvm/test/Transforms/LowerMatrixIntrinsics/multiply-minimal.ll (+5-5)
  • (modified) llvm/test/Transforms/PhaseOrdering/AArch64/hoisting-sinking-required-for-vectorization.ll (+13-13)
  • (modified) llvm/test/Transforms/PhaseOrdering/AArch64/peel-multiple-unreachable-exits-for-vectorization.ll (+11-11)
  • (modified) llvm/test/Transforms/PhaseOrdering/AArch64/quant_4x4.ll (+48-48)
  • (modified) llvm/test/Transforms/PhaseOrdering/AArch64/sinking-vs-if-conversion.ll (+4-4)
  • (modified) llvm/test/Transforms/PhaseOrdering/ARM/arm_mult_q15.ll (+6-6)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/excessive-unrolling.ll (+3-3)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/hoist-load-of-baseptr.ll (+5-5)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/pixel-splat.ll (+1-1)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/pr48844-br-to-switch-vectorization.ll (+1-1)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/pr50555.ll (+2-2)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/speculation-vs-tbaa.ll (+1-1)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/spurious-peeling.ll (+6-6)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/vdiv.ll (+6-6)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/vec-shift.ll (+4-4)
  • (modified) llvm/test/Transforms/PhaseOrdering/basic.ll (+4-4)
  • (modified) llvm/test/Transforms/PhaseOrdering/loop-access-checks.ll (+3-3)
  • (modified) llvm/test/Transforms/PhaseOrdering/pr39282.ll (+2-2)
  • (modified) llvm/test/Transforms/PhaseOrdering/simplifycfg-options.ll (+1-1)
  • (modified) llvm/test/Transforms/PhaseOrdering/switch_with_geps.ll (+4-52)
  • (modified) llvm/test/Transforms/SLPVectorizer/AArch64/gather-cost.ll (+6-6)
  • (modified) llvm/test/Transforms/SLPVectorizer/AArch64/gather-reduce.ll (+4-4)
  • (modified) llvm/test/Transforms/SLPVectorizer/AArch64/loadorder.ll (+16-16)
  • (modified) llvm/test/Transforms/SLPVectorizer/WebAssembly/no-vectorize-rotate.ll (+1-1)
  • (modified) llvm/test/Transforms/SLPVectorizer/X86/operandorder.ll (+2-2)
  • (modified) llvm/test/Transforms/SLPVectorizer/X86/opt.ll (+7-7)
  • (modified) llvm/test/Transforms/SLPVectorizer/X86/pr46983.ll (+16-16)
  • (modified) llvm/test/Transforms/SLPVectorizer/X86/pr47629-inseltpoison.ll (+136-136)
  • (modified) llvm/test/Transforms/SLPVectorizer/X86/pr47629.ll (+136-136)
  • (modified) llvm/test/Transforms/SampleProfile/pseudo-probe-instcombine.ll (+5-5)
  • (modified) llvm/test/Transforms/Util/strip-gc-relocates.ll (+2-2)
diff --git a/clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c b/clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c
index 3922513e22469a..5422d993ff1575 100644
--- a/clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c
+++ b/clang/test/CodeGen/PowerPC/builtins-ppc-pair-mma.c
@@ -25,13 +25,13 @@ void test1(unsigned char *vqp, unsigned char *vpp, vector unsigned char vc, unsi
 // CHECK-NEXT:    [[TMP2:%.*]] = extractvalue { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } [[TMP1]], 0
 // CHECK-NEXT:    store <16 x i8> [[TMP2]], ptr [[RESP:%.*]], align 16
 // CHECK-NEXT:    [[TMP3:%.*]] = extractvalue { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } [[TMP1]], 1
-// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 1
+// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 16
 // CHECK-NEXT:    store <16 x i8> [[TMP3]], ptr [[TMP4]], align 16
 // CHECK-NEXT:    [[TMP5:%.*]] = extractvalue { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } [[TMP1]], 2
-// CHECK-NEXT:    [[TMP6:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 2
+// CHECK-NEXT:    [[TMP6:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 32
 // CHECK-NEXT:    store <16 x i8> [[TMP5]], ptr [[TMP6]], align 16
 // CHECK-NEXT:    [[TMP7:%.*]] = extractvalue { <16 x i8>, <16 x i8>, <16 x i8>, <16 x i8> } [[TMP1]], 3
-// CHECK-NEXT:    [[TMP8:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 3
+// CHECK-NEXT:    [[TMP8:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 48
 // CHECK-NEXT:    store <16 x i8> [[TMP7]], ptr [[TMP8]], align 16
 // CHECK-NEXT:    ret void
 //
@@ -60,7 +60,7 @@ void test3(unsigned char *vqp, unsigned char *vpp, vector unsigned char vc, unsi
 // CHECK-NEXT:    [[TMP2:%.*]] = extractvalue { <16 x i8>, <16 x i8> } [[TMP1]], 0
 // CHECK-NEXT:    store <16 x i8> [[TMP2]], ptr [[RESP:%.*]], align 16
 // CHECK-NEXT:    [[TMP3:%.*]] = extractvalue { <16 x i8>, <16 x i8> } [[TMP1]], 1
-// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 1
+// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 16
 // CHECK-NEXT:    store <16 x i8> [[TMP3]], ptr [[TMP4]], align 16
 // CHECK-NEXT:    ret void
 //
@@ -1072,7 +1072,7 @@ void test76(unsigned char *vqp, unsigned char *vpp, vector unsigned char vc, uns
 // CHECK-NEXT:    [[TMP2:%.*]] = extractvalue { <16 x i8>, <16 x i8> } [[TMP1]], 0
 // CHECK-NEXT:    store <16 x i8> [[TMP2]], ptr [[RESP:%.*]], align 16
 // CHECK-NEXT:    [[TMP3:%.*]] = extractvalue { <16 x i8>, <16 x i8> } [[TMP1]], 1
-// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr inbounds <16 x i8>, ptr [[RESP]], i64 1
+// CHECK-NEXT:    [[TMP4:%.*]] = getelementptr inbounds i8, ptr [[RESP]], i64 16
 // CHECK-NEXT:    store <16 x i8> [[TMP3]], ptr [[TMP4]], align 16
 // CHECK-NEXT:    ret void
 //
diff --git a/clang/test/CodeGen/aarch64-ls64-inline-asm.c b/clang/test/CodeGen/aarch64-ls64-inline-asm.c
index ac2dbe1fa1b31a..744d6919b05ee4 100644
--- a/clang/test/CodeGen/aarch64-ls64-inline-asm.c
+++ b/clang/test/CodeGen/aarch64-ls64-inline-asm.c
@@ -16,8 +16,8 @@ void load(struct foo *output, void *addr)
 
 // CHECK-LABEL: @store(
 // CHECK-NEXT:  entry:
-// CHECK-NEXT:    [[TMP1:%.*]] = load i512, ptr [[INPUT:%.*]], align 8
-// CHECK-NEXT:    tail call void asm sideeffect "st64b $0,[$1]", "r,r,~{memory}"(i512 [[TMP1]], ptr [[ADDR:%.*]]) #[[ATTR1]], !srcloc !3
+// CHECK-NEXT:    [[TMP0:%.*]] = load i512, ptr [[INPUT:%.*]], align 8
+// CHECK-NEXT:    tail call void asm sideeffect "st64b $0,[$1]", "r,r,~{memory}"(i512 [[TMP0]], ptr [[ADDR:%.*]]) #[[ATTR1]], !srcloc !3
 // CHECK-NEXT:    ret void
 //
 void store(const struct foo *input, void *addr)
@@ -29,25 +29,25 @@ void store(const struct foo *input, void *addr)
 // CHECK-NEXT:  entry:
 // CHECK-NEXT:    [[TMP0:%.*]] = load i32, ptr [[IN:%.*]], align 4, !tbaa [[TBAA4:![0-9]+]]
 // CHECK-NEXT:    [[CONV:%.*]] = sext i32 [[TMP0]] to i64
-// CHECK-NEXT:    [[ARRAYIDX1:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 1
+// CHECK-NEXT:    [[ARRAYIDX1:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 4
 // CHECK-NEXT:    [[TMP1:%.*]] = load i32, ptr [[ARRAYIDX1]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV2:%.*]] = sext i32 [[TMP1]] to i64
-// CHECK-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 4
+// CHECK-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 16
 // CHECK-NEXT:    [[TMP2:%.*]] = load i32, ptr [[ARRAYIDX4]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV5:%.*]] = sext i32 [[TMP2]] to i64
-// CHECK-NEXT:    [[ARRAYIDX7:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 16
+// CHECK-NEXT:    [[ARRAYIDX7:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 64
 // CHECK-NEXT:    [[TMP3:%.*]] = load i32, ptr [[ARRAYIDX7]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV8:%.*]] = sext i32 [[TMP3]] to i64
-// CHECK-NEXT:    [[ARRAYIDX10:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 25
+// CHECK-NEXT:    [[ARRAYIDX10:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 100
 // CHECK-NEXT:    [[TMP4:%.*]] = load i32, ptr [[ARRAYIDX10]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV11:%.*]] = sext i32 [[TMP4]] to i64
-// CHECK-NEXT:    [[ARRAYIDX13:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 36
+// CHECK-NEXT:    [[ARRAYIDX13:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 144
 // CHECK-NEXT:    [[TMP5:%.*]] = load i32, ptr [[ARRAYIDX13]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV14:%.*]] = sext i32 [[TMP5]] to i64
-// CHECK-NEXT:    [[ARRAYIDX16:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 49
+// CHECK-NEXT:    [[ARRAYIDX16:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 196
 // CHECK-NEXT:    [[TMP6:%.*]] = load i32, ptr [[ARRAYIDX16]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV17:%.*]] = sext i32 [[TMP6]] to i64
-// CHECK-NEXT:    [[ARRAYIDX19:%.*]] = getelementptr inbounds i32, ptr [[IN]], i64 64
+// CHECK-NEXT:    [[ARRAYIDX19:%.*]] = getelementptr inbounds i8, ptr [[IN]], i64 256
 // CHECK-NEXT:    [[TMP7:%.*]] = load i32, ptr [[ARRAYIDX19]], align 4, !tbaa [[TBAA4]]
 // CHECK-NEXT:    [[CONV20:%.*]] = sext i32 [[TMP7]] to i64
 // CHECK-NEXT:    [[S_SROA_10_0_INSERT_EXT:%.*]] = zext i64 [[CONV20]] to i512
diff --git a/clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c b/clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c
index 22e2e0c2ff102d..323afb64591249 100644
--- a/clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c
+++ b/clang/test/CodeGen/attr-arm-sve-vector-bits-bitcast.c
@@ -30,21 +30,21 @@ DEFINE_STRUCT(bool)
 
 // CHECK-128-LABEL: @read_int64(
 // CHECK-128-NEXT:  entry:
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
 // CHECK-128-NEXT:    [[TMP0:%.*]] = load <2 x i64>, ptr [[Y]], align 16, !tbaa [[TBAA2:![0-9]+]]
 // CHECK-128-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i64> @llvm.vector.insert.nxv2i64.v2i64(<vscale x 2 x i64> undef, <2 x i64> [[TMP0]], i64 0)
 // CHECK-128-NEXT:    ret <vscale x 2 x i64> [[CAST_SCALABLE]]
 //
 // CHECK-256-LABEL: @read_int64(
 // CHECK-256-NEXT:  entry:
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
 // CHECK-256-NEXT:    [[TMP0:%.*]] = load <4 x i64>, ptr [[Y]], align 16, !tbaa [[TBAA2:![0-9]+]]
 // CHECK-256-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i64> @llvm.vector.insert.nxv2i64.v4i64(<vscale x 2 x i64> undef, <4 x i64> [[TMP0]], i64 0)
 // CHECK-256-NEXT:    ret <vscale x 2 x i64> [[CAST_SCALABLE]]
 //
 // CHECK-512-LABEL: @read_int64(
 // CHECK-512-NEXT:  entry:
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
 // CHECK-512-NEXT:    [[TMP0:%.*]] = load <8 x i64>, ptr [[Y]], align 16, !tbaa [[TBAA2:![0-9]+]]
 // CHECK-512-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i64> @llvm.vector.insert.nxv2i64.v8i64(<vscale x 2 x i64> undef, <8 x i64> [[TMP0]], i64 0)
 // CHECK-512-NEXT:    ret <vscale x 2 x i64> [[CAST_SCALABLE]]
@@ -56,21 +56,21 @@ svint64_t read_int64(struct struct_int64 *s) {
 // CHECK-128-LABEL: @write_int64(
 // CHECK-128-NEXT:  entry:
 // CHECK-128-NEXT:    [[CAST_FIXED:%.*]] = tail call <2 x i64> @llvm.vector.extract.v2i64.nxv2i64(<vscale x 2 x i64> [[X:%.*]], i64 0)
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
 // CHECK-128-NEXT:    store <2 x i64> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    ret void
 //
 // CHECK-256-LABEL: @write_int64(
 // CHECK-256-NEXT:  entry:
 // CHECK-256-NEXT:    [[CAST_FIXED:%.*]] = tail call <4 x i64> @llvm.vector.extract.v4i64.nxv2i64(<vscale x 2 x i64> [[X:%.*]], i64 0)
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
 // CHECK-256-NEXT:    store <4 x i64> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-256-NEXT:    ret void
 //
 // CHECK-512-LABEL: @write_int64(
 // CHECK-512-NEXT:  entry:
 // CHECK-512-NEXT:    [[CAST_FIXED:%.*]] = tail call <8 x i64> @llvm.vector.extract.v8i64.nxv2i64(<vscale x 2 x i64> [[X:%.*]], i64 0)
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_INT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
 // CHECK-512-NEXT:    store <8 x i64> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-512-NEXT:    ret void
 //
@@ -84,21 +84,21 @@ void write_int64(struct struct_int64 *s, svint64_t x) {
 
 // CHECK-128-LABEL: @read_float64(
 // CHECK-128-NEXT:  entry:
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
 // CHECK-128-NEXT:    [[TMP0:%.*]] = load <2 x double>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x double> @llvm.vector.insert.nxv2f64.v2f64(<vscale x 2 x double> undef, <2 x double> [[TMP0]], i64 0)
 // CHECK-128-NEXT:    ret <vscale x 2 x double> [[CAST_SCALABLE]]
 //
 // CHECK-256-LABEL: @read_float64(
 // CHECK-256-NEXT:  entry:
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
 // CHECK-256-NEXT:    [[TMP0:%.*]] = load <4 x double>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-256-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x double> @llvm.vector.insert.nxv2f64.v4f64(<vscale x 2 x double> undef, <4 x double> [[TMP0]], i64 0)
 // CHECK-256-NEXT:    ret <vscale x 2 x double> [[CAST_SCALABLE]]
 //
 // CHECK-512-LABEL: @read_float64(
 // CHECK-512-NEXT:  entry:
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
 // CHECK-512-NEXT:    [[TMP0:%.*]] = load <8 x double>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-512-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x double> @llvm.vector.insert.nxv2f64.v8f64(<vscale x 2 x double> undef, <8 x double> [[TMP0]], i64 0)
 // CHECK-512-NEXT:    ret <vscale x 2 x double> [[CAST_SCALABLE]]
@@ -110,21 +110,21 @@ svfloat64_t read_float64(struct struct_float64 *s) {
 // CHECK-128-LABEL: @write_float64(
 // CHECK-128-NEXT:  entry:
 // CHECK-128-NEXT:    [[CAST_FIXED:%.*]] = tail call <2 x double> @llvm.vector.extract.v2f64.nxv2f64(<vscale x 2 x double> [[X:%.*]], i64 0)
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
 // CHECK-128-NEXT:    store <2 x double> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    ret void
 //
 // CHECK-256-LABEL: @write_float64(
 // CHECK-256-NEXT:  entry:
 // CHECK-256-NEXT:    [[CAST_FIXED:%.*]] = tail call <4 x double> @llvm.vector.extract.v4f64.nxv2f64(<vscale x 2 x double> [[X:%.*]], i64 0)
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
 // CHECK-256-NEXT:    store <4 x double> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-256-NEXT:    ret void
 //
 // CHECK-512-LABEL: @write_float64(
 // CHECK-512-NEXT:  entry:
 // CHECK-512-NEXT:    [[CAST_FIXED:%.*]] = tail call <8 x double> @llvm.vector.extract.v8f64.nxv2f64(<vscale x 2 x double> [[X:%.*]], i64 0)
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_FLOAT64:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
 // CHECK-512-NEXT:    store <8 x double> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-512-NEXT:    ret void
 //
@@ -138,21 +138,21 @@ void write_float64(struct struct_float64 *s, svfloat64_t x) {
 
 // CHECK-128-LABEL: @read_bfloat16(
 // CHECK-128-NEXT:  entry:
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
 // CHECK-128-NEXT:    [[TMP0:%.*]] = load <8 x bfloat>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 8 x bfloat> @llvm.vector.insert.nxv8bf16.v8bf16(<vscale x 8 x bfloat> undef, <8 x bfloat> [[TMP0]], i64 0)
 // CHECK-128-NEXT:    ret <vscale x 8 x bfloat> [[CAST_SCALABLE]]
 //
 // CHECK-256-LABEL: @read_bfloat16(
 // CHECK-256-NEXT:  entry:
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
 // CHECK-256-NEXT:    [[TMP0:%.*]] = load <16 x bfloat>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-256-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 8 x bfloat> @llvm.vector.insert.nxv8bf16.v16bf16(<vscale x 8 x bfloat> undef, <16 x bfloat> [[TMP0]], i64 0)
 // CHECK-256-NEXT:    ret <vscale x 8 x bfloat> [[CAST_SCALABLE]]
 //
 // CHECK-512-LABEL: @read_bfloat16(
 // CHECK-512-NEXT:  entry:
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
 // CHECK-512-NEXT:    [[TMP0:%.*]] = load <32 x bfloat>, ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-512-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 8 x bfloat> @llvm.vector.insert.nxv8bf16.v32bf16(<vscale x 8 x bfloat> undef, <32 x bfloat> [[TMP0]], i64 0)
 // CHECK-512-NEXT:    ret <vscale x 8 x bfloat> [[CAST_SCALABLE]]
@@ -164,21 +164,21 @@ svbfloat16_t read_bfloat16(struct struct_bfloat16 *s) {
 // CHECK-128-LABEL: @write_bfloat16(
 // CHECK-128-NEXT:  entry:
 // CHECK-128-NEXT:    [[CAST_FIXED:%.*]] = tail call <8 x bfloat> @llvm.vector.extract.v8bf16.nxv8bf16(<vscale x 8 x bfloat> [[X:%.*]], i64 0)
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 16
 // CHECK-128-NEXT:    store <8 x bfloat> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    ret void
 //
 // CHECK-256-LABEL: @write_bfloat16(
 // CHECK-256-NEXT:  entry:
 // CHECK-256-NEXT:    [[CAST_FIXED:%.*]] = tail call <16 x bfloat> @llvm.vector.extract.v16bf16.nxv8bf16(<vscale x 8 x bfloat> [[X:%.*]], i64 0)
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 32
 // CHECK-256-NEXT:    store <16 x bfloat> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-256-NEXT:    ret void
 //
 // CHECK-512-LABEL: @write_bfloat16(
 // CHECK-512-NEXT:  entry:
 // CHECK-512-NEXT:    [[CAST_FIXED:%.*]] = tail call <32 x bfloat> @llvm.vector.extract.v32bf16.nxv8bf16(<vscale x 8 x bfloat> [[X:%.*]], i64 0)
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BFLOAT16:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 64
 // CHECK-512-NEXT:    store <32 x bfloat> [[CAST_FIXED]], ptr [[Y]], align 16, !tbaa [[TBAA2]]
 // CHECK-512-NEXT:    ret void
 //
@@ -192,7 +192,7 @@ void write_bfloat16(struct struct_bfloat16 *s, svbfloat16_t x) {
 
 // CHECK-128-LABEL: @read_bool(
 // CHECK-128-NEXT:  entry:
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 2
 // CHECK-128-NEXT:    [[TMP0:%.*]] = load <2 x i8>, ptr [[Y]], align 2, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i8> @llvm.vector.insert.nxv2i8.v2i8(<vscale x 2 x i8> undef, <2 x i8> [[TMP0]], i64 0)
 // CHECK-128-NEXT:    [[TMP1:%.*]] = bitcast <vscale x 2 x i8> [[CAST_SCALABLE]] to <vscale x 16 x i1>
@@ -200,7 +200,7 @@ void write_bfloat16(struct struct_bfloat16 *s, svbfloat16_t x) {
 //
 // CHECK-256-LABEL: @read_bool(
 // CHECK-256-NEXT:  entry:
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 4
 // CHECK-256-NEXT:    [[TMP0:%.*]] = load <4 x i8>, ptr [[Y]], align 2, !tbaa [[TBAA2]]
 // CHECK-256-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i8> @llvm.vector.insert.nxv2i8.v4i8(<vscale x 2 x i8> undef, <4 x i8> [[TMP0]], i64 0)
 // CHECK-256-NEXT:    [[TMP1:%.*]] = bitcast <vscale x 2 x i8> [[CAST_SCALABLE]] to <vscale x 16 x i1>
@@ -208,7 +208,7 @@ void write_bfloat16(struct struct_bfloat16 *s, svbfloat16_t x) {
 //
 // CHECK-512-LABEL: @read_bool(
 // CHECK-512-NEXT:  entry:
-// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-512-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 8
 // CHECK-512-NEXT:    [[TMP0:%.*]] = load <8 x i8>, ptr [[Y]], align 2, !tbaa [[TBAA2]]
 // CHECK-512-NEXT:    [[CAST_SCALABLE:%.*]] = tail call <vscale x 2 x i8> @llvm.vector.insert.nxv2i8.v8i8(<vscale x 2 x i8> undef, <8 x i8> [[TMP0]], i64 0)
 // CHECK-512-NEXT:    [[TMP1:%.*]] = bitcast <vscale x 2 x i8> [[CAST_SCALABLE]] to <vscale x 16 x i1>
@@ -222,7 +222,7 @@ svbool_t read_bool(struct struct_bool *s) {
 // CHECK-128-NEXT:  entry:
 // CHECK-128-NEXT:    [[TMP0:%.*]] = bitcast <vscale x 16 x i1> [[X:%.*]] to <vscale x 2 x i8>
 // CHECK-128-NEXT:    [[CAST_FIXED:%.*]] = tail call <2 x i8> @llvm.vector.extract.v2i8.nxv2i8(<vscale x 2 x i8> [[TMP0]], i64 0)
-// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-128-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 2
 // CHECK-128-NEXT:    store <2 x i8> [[CAST_FIXED]], ptr [[Y]], align 2, !tbaa [[TBAA2]]
 // CHECK-128-NEXT:    ret void
 //
@@ -230,7 +230,7 @@ svbool_t read_bool(struct struct_bool *s) {
 // CHECK-256-NEXT:  entry:
 // CHECK-256-NEXT:    [[TMP0:%.*]] = bitcast <vscale x 16 x i1> [[X:%.*]] to <vscale x 2 x i8>
 // CHECK-256-NEXT:    [[CAST_FIXED:%.*]] = tail call <4 x i8> @llvm.vector.extract.v4i8.nxv2i8(<vscale x 2 x i8> [[TMP0]], i64 0)
-// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds [[STRUCT_STRUCT_BOOL:%.*]], ptr [[S:%.*]], i64 0, i32 1
+// CHECK-256-NEXT:    [[Y:%.*]] = getelementptr inbounds i8, ptr [[S:%.*]], i64 4
 // CHECK-256-NEXT:...
[truncated]

@llvmbot added the flang label Dec 20, 2023
Contributor

@aeubanks aeubanks left a comment
I think this is a good direction to go, lgtm

Contributor

@fhahn fhahn left a comment

LGTM, looks like a great first step! Will be interesting to see what kind of regressions this surfaces (if any)

@dtcxzyw
Member

dtcxzyw commented Dec 21, 2023

@nikic Could you please have a look at dtcxzyw/llvm-opt-benchmark#17?
One regression:

diff --git a/bench/brotli/optimized/compound_dictionary.c.ll b/bench/brotli/optimized/compound_dictionary.c.ll
index 21fd37fd..b9894810 100644
--- a/bench/brotli/optimized/compound_dictionary.c.ll
+++ b/bench/brotli/optimized/compound_dictionary.c.ll
@@ -3,9 +3,6 @@ source_filename = "bench/brotli/original/compound_dictionary.c.ll"
 target datalayout = "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128"
 target triple = "x86_64-unknown-linux-gnu"
 
-%struct.PreparedDictionary = type { i32, i32, i32, i32, i32, i32 }
-%struct.CompoundDictionary = type { i64, i64, [16 x ptr], [16 x ptr], [16 x i64], i64, [16 x ptr] }
-
 ; Function Attrs: nounwind uwtable
 define hidden ptr @CreatePreparedDictionary(ptr noundef %m, ptr noundef %source, i64 noundef %source_size) local_unnamed_addr #0 {
 entry:
@@ -168,25 +165,29 @@ cond.true119.i:                                   ; preds = %for.end106.i
 
 cond.end123.i:                                    ; preds = %cond.true119.i, %for.end106.i
   %cond124.i = phi ptr [ %call121.i, %cond.true119.i ], [ null, %for.end106.i ]
-  %arrayidx125.i = getelementptr inbounds %struct.PreparedDictionary, ptr %cond124.i, i64 1
+  %arrayidx125.i = getelementptr inbounds i8, ptr %cond124.i, i64 24
   %arrayidx127.i = getelementptr inbounds i32, ptr %arrayidx125.i, i64 %idxprom.i
   %arrayidx129.i = getelementptr inbounds i16, ptr %arrayidx127.i, i64 %idxprom26.i
   %arrayidx131.i = getelementptr inbounds i32, ptr %arrayidx129.i, i64 %conv113.i
   store i32 -558043677, ptr %cond124.i, align 4
-  %num_items.i = getelementptr inbounds %struct.PreparedDictionary, ptr %cond124.i, i64 0, i32 1
+  %num_items.i = getelementptr inbounds i8, ptr %cond124.i, i64 4
   store i32 %add100.i, ptr %num_items.i, align 4
   %conv132.i = trunc i64 %source_size to i32
-  %source_size133.i = getelementptr inbounds %struct.PreparedDictionary, ptr %cond124.i, i64 0, i32 2
+  %source_size133.i = getelementptr inbounds i8, ptr %cond124.i, i64 8
   store i32 %conv132.i, ptr %source_size133.i, align 4
-  %hash_bits134.i = getelementptr inbounds %struct.PreparedDictionary, ptr %cond124.i, i64 0, i32 3
+  %hash_bits134.i = getelementptr inbounds i8, ptr %cond124.i, i64 12
   store i32 40, ptr %hash_bits134.i, align 4
-  %bucket_bits135.i = getelementptr inbounds %struct.PreparedDictionary, ptr %cond124.i, i64 0, i32 4
+  %bucket_bits135.i = getelementptr inbounds i8, ptr %cond124.i, i64 16
   store i32 %bucket_bits.0.lcssa, ptr %bucket_bits135.i, align 4
-  %slot_bits136.i = getelementptr inbounds %struct.PreparedDictionary, ptr %cond124.i, i64 0, i32 5
+  %slot_bits136.i = getelementptr inbounds i8, ptr %cond124.i, i64 20
   store i32 %slot_bits.0.lcssa, ptr %slot_bits136.i, align 4
   store ptr %source, ptr %arrayidx131.i, align 1
   br label %for.body140.i
 
+for.cond151.preheader.i:                          ; preds = %for.body140.i
+  %invariant.gep.i = getelementptr i8, ptr %arrayidx129.i, i64 -4
+  br label %for.body154.i
+
 for.body140.i:                                    ; preds = %for.body140.i, %cond.end123.i
   %indvars.iv145.i = phi i64 [ 0, %cond.end123.i ], [ %indvars.iv.next146.i, %for.body140.i ]
   %total_items.1139.i = phi i32 [ 0, %cond.end123.i ], [ %add145.i, %for.body140.i ]
@@ -198,10 +199,10 @@ for.body140.i:                                    ; preds = %for.body140.i, %con
   store i32 0, ptr %arrayidx144.i, align 4
   %indvars.iv.next146.i = add nuw nsw i64 %indvars.iv145.i, 1
   %exitcond150.not.i = icmp eq i64 %indvars.iv.next146.i, %idxprom.i
-  br i1 %exitcond150.not.i, label %for.body154.i, label %for.body140.i, !llvm.loop !9
+  br i1 %exitcond150.not.i, label %for.cond151.preheader.i, label %for.body140.i, !llvm.loop !9
 
-for.body154.i:                                    ; preds = %for.body140.i, %for.inc204.i
-  %indvars.iv152.i = phi i64 [ %indvars.iv.next153.i, %for.inc204.i ], [ 0, %for.body140.i ]
+for.body154.i:                                    ; preds = %for.inc204.i, %for.cond151.preheader.i
+  %indvars.iv152.i = phi i64 [ 0, %for.cond151.preheader.i ], [ %indvars.iv.next153.i, %for.inc204.i ]
   %5 = trunc i64 %indvars.iv152.i to i32
   %and155.i = and i32 %sub3.i, %5
   %arrayidx158.i = getelementptr inbounds i16, ptr %arrayidx25.i, i64 %indvars.iv152.i
@@ -243,7 +244,7 @@ for.body194.i:                                    ; preds = %for.body194.i, %if.
   %pos.0.in140.i = phi ptr [ %arrayidx189.i, %if.end177.i ], [ %arrayidx198.i, %for.body194.i ]
   %pos.0.i = load i32, ptr %pos.0.in140.i, align 4
   %inc195.i = add nuw nsw i64 %cursor.0142.i, 1
-  %arrayidx196.i = getelementptr i32, ptr %arrayidx129.i, i64 %cursor.0142.i
+  %arrayidx196.i = getelementptr inbounds i32, ptr %arrayidx129.i, i64 %cursor.0142.i
   store i32 %pos.0.i, ptr %arrayidx196.i, align 4
   %idxprom197.i = zext i32 %pos.0.i to i64
   %arrayidx198.i = getelementptr inbounds i32, ptr %arrayidx29.i, i64 %idxprom197.i
@@ -252,9 +253,9 @@ for.body194.i:                                    ; preds = %for.body194.i, %if.
   br i1 %exitcond151.not.i, label %for.end201.i, label %for.body194.i, !llvm.loop !10
 
 for.end201.i:                                     ; preds = %for.body194.i
-  %arrayidx196.i.le = getelementptr i32, ptr %arrayidx129.i, i64 %cursor.0142.i
+  %gep.i = getelementptr i32, ptr %invariant.gep.i, i64 %inc195.i
   %or.i = or i32 %pos.0.i, -2147483648
-  store i32 %or.i, ptr %arrayidx196.i.le, align 4
+  store i32 %or.i, ptr %gep.i, align 4
   br label %for.inc204.i
 
 for.inc204.i:                                     ; preds = %for.end201.i, %if.then174.i

Alive2: https://alive2.llvm.org/ce/z/JfN5sB

I will post a patch later.

@nikic
Contributor Author

nikic commented Dec 21, 2023

@dtcxzyw GitHub can't display the diff, and struggles to clone the repo. Can you share the diffs for just the mentioned files?

@dtcxzyw
Member

dtcxzyw commented Dec 21, 2023

Another example:

diff --git a/bench/hermes/optimized/Sorting.cpp.ll b/bench/hermes/optimized/Sorting.cpp.ll
index 1a808c47..e03089ca 100644
--- a/bench/hermes/optimized/Sorting.cpp.ll
+++ b/bench/hermes/optimized/Sorting.cpp.ll
@@ -41,20 +41,22 @@ if.end:                                           ; preds = %entry
   %call5.i.i.i.i.i.i = tail call noalias noundef nonnull ptr @_Znwm(i64 noundef %mul.i.i.i.i.i.i) #9
   store ptr %call5.i.i.i.i.i.i, ptr %index, align 8
   %add.ptr.i.i.i = getelementptr inbounds i32, ptr %call5.i.i.i.i.i.i, i64 %conv
-  %_M_end_of_storage.i.i.i = getelementptr inbounds %"struct.std::_Vector_base<unsigned int, std::allocator<unsigned int>>::_Vector_impl_data", ptr %index, i64 0, i32 2
+  %_M_end_of_storage.i.i.i = getelementptr inbounds i8, ptr %index, i64 16
   store ptr %add.ptr.i.i.i, ptr %_M_end_of_storage.i.i.i, align 8
   store i32 0, ptr %call5.i.i.i.i.i.i, align 4
-  %incdec.ptr.i.i.i.i.i = getelementptr i32, ptr %call5.i.i.i.i.i.i, i64 1
-  %cmp.i.i.i.i.i.i.i = icmp eq i32 %sub, 1
+  %incdec.ptr.i.i.i.i.i = getelementptr i8, ptr %call5.i.i.i.i.i.i, i64 4
+  %sub.i.i.i.i.i = add nsw i64 %conv, -1
+  %cmp.i.i.i.i.i.i.i = icmp eq i64 %sub.i.i.i.i.i, 0
   br i1 %cmp.i.i.i.i.i.i.i, label %_ZNSt6vectorIjSaIjEEC2EmRKS0_.exit, label %if.end.i.i.i.i.i.i.i
 
 if.end.i.i.i.i.i.i.i:                             ; preds = %if.end
   %1 = add nsw i64 %mul.i.i.i.i.i.i, -4
   tail call void @llvm.memset.p0.i64(ptr align 4 %incdec.ptr.i.i.i.i.i, i8 0, i64 %1, i1 false)
+  %add.ptr.i.i.i.i.i.i.i = getelementptr inbounds i32, ptr %incdec.ptr.i.i.i.i.i, i64 %sub.i.i.i.i.i
   br label %_ZNSt6vectorIjSaIjEEC2EmRKS0_.exit
 
 _ZNSt6vectorIjSaIjEEC2EmRKS0_.exit:               ; preds = %if.end, %if.end.i.i.i.i.i.i.i
-  %__first.addr.0.i.i.i.i.i = phi ptr [ %incdec.ptr.i.i.i.i.i, %if.end ], [ %add.ptr.i.i.i, %if.end.i.i.i.i.i.i.i ]
+  %__first.addr.0.i.i.i.i.i = phi ptr [ %incdec.ptr.i.i.i.i.i, %if.end ], [ %add.ptr.i.i.i.i.i.i.i, %if.end.i.i.i.i.i.i.i ]
   store ptr %__first.addr.0.i.i.i.i.i, ptr %0, align 8
   %cmp116.not = icmp eq i32 %end, %begin
   br i1 %cmp116.not, label %for.end, label %for.body

@dtcxzyw
Member

dtcxzyw commented Dec 21, 2023

@dtcxzyw GitHub can't display the diff, and struggles to clone the repo. Can you share the diffs for just the mentioned files?

I have posted the diff between optimized IRs.

@Prabhuk
Contributor

Prabhuk commented Feb 28, 2024

I am debugging an assertion failure regression in one of our projects that uses the FlatMap data structure implemented here:

https://pigweed.googlesource.com/pigweed/pigweed/+/refs/heads/main/pw_containers/public/pw_containers/flat_map.h#180

The regression was that the find function above failed to return the value that's stored in the Map for a valid key.

A bit of debugging and git bisect of LLVM commits pointed me to this commit 90ba330#diff-0ffb024c6c026c4b93507d08e6d24dccf98980023130781164b71586bc747f2d as the source of failure. Reverting this commit locally fixes the regression.

I am working on generating a simple reproducer.

bors added a commit to rust-lang-ci/rust that referenced this pull request Mar 2, 2024
Always generate GEP i8 / ptradd for struct offsets

This implements rust-lang#98615, and goes a bit further to remove `struct_gep` entirely.

Upstream LLVM is in the beginning stages of [migrating to `ptradd`](https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699). LLVM 19 will [canonicalize](llvm/llvm-project#68882) all constant-offset GEPs to i8, which has roughly the same effect as this change.

Fixes rust-lang#121719.

Split out from rust-lang#121577.

r? `@nikic`
bors added a commit to rust-lang-ci/rust that referenced this pull request Mar 3, 2024
github-actions bot pushed a commit to rust-lang/miri that referenced this pull request Mar 4, 2024
antoyo pushed a commit to rust-lang/rustc_codegen_gcc that referenced this pull request Mar 5, 2024
@Prabhuk
Contributor

Prabhuk commented Mar 15, 2024

This bug is the actual root cause of the failure: #85333

lnicola pushed a commit to lnicola/rust-analyzer that referenced this pull request Apr 7, 2024
hiraditya added a commit to hiraditya/llvm-project that referenced this pull request Apr 11, 2024
As mentioned in llvm#68882 and https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699

Gep arithmetic isn't consistent with different types. GVNSink didn't realize this and sank all geps as long as their operands can be wired via PHIs in a post-dominator.
hiraditya added a commit to hiraditya/llvm-project that referenced this pull request Apr 12, 2024
nikic added a commit to nikic/llvm-project that referenced this pull request Apr 24, 2024
This patch canonicalizes constant expression GEPs to use i8 source
element type, aka ptradd. This is the ConstantFolding equivalent of
the InstCombine canonicalization introduced in llvm#68882.

I believe all our optimizations working on constant expression GEPs
(like GlobalOpt etc) have already been switched to work on offsets,
so I don't expect any significant fallout from this change.

This is part of:
https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699
dtcxzyw added a commit that referenced this pull request Apr 25, 2024
…into `gep T, (gep i8, base, C1 + C2 * sizeof(T)), Index` (#76177)

This patch tries to canonicalize `gep T, (gep i8, base, C1), (Index +
C2)` into `gep T, (gep i8, base, C1 + C2 * sizeof(T)), Index`.

Alive2: https://alive2.llvm.org/ce/z/dxShKF
Fixes regressions found in
#68882.
nikic added a commit to nikic/llvm-project that referenced this pull request Apr 26, 2024
hiraditya added a commit to hiraditya/llvm-project that referenced this pull request Apr 26, 2024
RalfJung pushed a commit to RalfJung/rust-analyzer that referenced this pull request Apr 27, 2024
hiraditya added a commit to hiraditya/llvm-project that referenced this pull request Apr 27, 2024
hiraditya added a commit to hiraditya/llvm-project that referenced this pull request Apr 29, 2024
hiraditya added a commit to hiraditya/llvm-project that referenced this pull request Apr 30, 2024
hiraditya added a commit to hiraditya/llvm-project that referenced this pull request Apr 30, 2024
hiraditya added a commit that referenced this pull request Apr 30, 2024
As mentioned in #68882 and
https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699

Gep arithmetic isn't consistent with different types. GVNSink didn't
realize this and sank all geps
as long as their operands can be wired via PHIs
in a post-dominator.

Fixes: #85333
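To illustrate the GVNSink bug described above (a hypothetical IR sketch, not code from this patch): a GEP's source element type scales its index, so two GEPs with matching operand lists but different element types compute different addresses, and sinking them into one instruction behind a PHI changes the result on one path.

```llvm
; %g1 and %g2 differ only in source element type, yet they address
; %p+8 and %p+32 respectively, because i8 scales the index by 1
; while i32 scales it by 4.
define ptr @sink_example(i1 %c, ptr %p) {
entry:
  br i1 %c, label %a, label %b
a:
  %g1 = getelementptr i8, ptr %p, i64 8   ; offset 8 * 1 byte
  br label %merge
b:
  %g2 = getelementptr i32, ptr %p, i64 8  ; offset 8 * 4 bytes
  br label %merge
merge:
  ; Sinking both GEPs into a single instruction here would have to
  ; pick one element type, miscomputing the address on the other arm.
  %r = phi ptr [ %g1, %a ], [ %g2, %b ]
  ret ptr %r
}
```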
@sgundapa

I've observed a significant regression in one of the AMDGPU benchmarks after applying this patch. The base address calculation within the unrolled loop seems to be the source. I've attached "before.log" and "after.log" files that detail the issue.

The modified GEP format, introduced by this patch, doesn't align with the canonical form expected by the "separate-constant-offset-from-gep" pass. Consequently, the "straight line strength reduction" (SLSR) pass cannot optimize the computation.

While the intention behind this patch, replicating some "split-gep" pass functionality, is understood, the unintended impact on the SLSR pass is notable.

Before I delve into potential solutions, I would greatly appreciate your insights and perspective on this matter.

Attachments: after.log, before.log
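A sketch of the pattern mismatch being described (illustrative IR only, not taken from the attached logs): SeparateConstOffsetFromGEP expects the constant part of an address to appear inside the index of a typed GEP, whereas after this canonicalization it lives on a leading i8 GEP, so related accesses no longer look like they share a base.

```llvm
; Form SeparateConstOffsetFromGEP/SLSR expect: constant folded into
; the index of a single typed GEP.
%a1 = getelementptr inbounds float, ptr %p, i64 %i
%i1 = add i64 %i, 1
%a2 = getelementptr inbounds float, ptr %p, i64 %i1

; Canonicalized form: the constant part is split onto a separate i8
; GEP, so %a2.canon appears to have a different base pointer than %a1.
%q       = getelementptr inbounds i8, ptr %p, i64 4
%a2.canon = getelementptr inbounds float, ptr %q, i64 %i
```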

hiraditya added a commit to hiraditya/llvm-project that referenced this pull request May 13, 2024
@nikic
Contributor Author

nikic commented May 14, 2024

@sgundapa Does #90802 fix the issue you're seeing?

hiraditya added a commit that referenced this pull request May 14, 2024
As mentioned in #68882 and
https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699

Gep arithmetic isn't consistent with different types. GVNSink didn't
realize this and sank all geps as long as their operands can be wired
via PHIs
in a post-dominator.

Fixes: #85333
Reapply: #88440 after fixing the non-determinism issues in #90995
@sgundapa

@sgundapa Does #90802 fix the issue you're seeing?

Unfortunately no.

@nikic
Contributor Author

nikic commented May 15, 2024

@sgundapa Hm, I think the problem may be that while #90802 removes the limitation on the element types, it's still limited to single-index GEPs, while here there are multiple indices. (Assuming this is related to swapping the GEPs at all, I'm stabbing in the dark here.)
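For reference, a sketch of the single-index versus multi-index distinction (illustrative IR, not taken from the benchmark):

```llvm
; Single-index GEP following a constant i8 GEP -- the shape a fold
; limited to single-index GEPs can handle:
%q  = getelementptr inbounds i8, ptr %p, i64 64
%a1 = getelementptr inbounds i32, ptr %q, i64 %i

; Multi-index GEP computing the same address (64 + 4 * %i); the
; variable index is buried among multiple operands, so a
; single-index-only fold skips it:
%a2 = getelementptr inbounds [16 x i32], ptr %p, i64 1, i64 %i
```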

Labels
backend:AMDGPU, backend:RISC-V, clang:openmp, clang, coroutines, flang, llvm:analysis, llvm:transforms

Successfully merging this pull request may close these issues.

[clang] unnecessary conditions marked with [[likely]] or [[unlikely]] are not removed by the optimizer