[X86][GlobalISEL] Support lowering aligned unordered atomics
The existing lowering code is accidentally correct for unordered atomics as far as I can tell. An unordered atomic has no memory ordering constraint; it simply requires that the actual load or store be performed as a single, well-aligned instruction. As such, relax the restriction, adding tests to ensure the lowering remains correct in the future.

Differential Revision: https://reviews.llvm.org/D57803

llvm-svn: 356280
preames committed Mar 15, 2019
1 parent 44ed286 commit d238bf7
Showing 2 changed files with 967 additions and 3 deletions.
llvm/lib/Target/X86/X86InstructionSelector.cpp: 15 additions & 3 deletions
@@ -512,10 +512,22 @@ bool X86InstructionSelector::selectLoadStoreOp(MachineInstr &I,
   LLT Ty = MRI.getType(DefReg);
   const RegisterBank &RB = *RBI.getRegBank(DefReg, MRI, TRI);
 
   assert(I.hasOneMemOperand());
   auto &MemOp = **I.memoperands_begin();
-  if (MemOp.getOrdering() != AtomicOrdering::NotAtomic) {
-    LLVM_DEBUG(dbgs() << "Atomic load/store not supported yet\n");
-    return false;
+  if (MemOp.isAtomic()) {
+    // Note: for unordered operations, we rely on the fact the appropriate MMO
+    // is already on the instruction we're mutating, and thus we don't need to
+    // make any changes. So long as we select an opcode which is capable of
+    // loading or storing the appropriate size atomically, the rest of the
+    // backend is required to respect the MMO state.
+    if (!MemOp.isUnordered()) {
+      LLVM_DEBUG(dbgs() << "Atomic ordering not supported yet\n");
+      return false;
+    }
+    if (MemOp.getAlignment() < Ty.getSizeInBits()/8) {
+      LLVM_DEBUG(dbgs() << "Unaligned atomics not supported yet\n");
+      return false;
+    }
   }
 
   unsigned NewOpc = getLoadStoreOp(Ty, RB, Opc, MemOp.getAlignment());
