[ARM] Be super conservative about atomics
As requested during review of D57601 <https://reviews.llvm.org/D57601>, be equally conservative for atomic MMOs as for volatile MMOs in all in-tree backends. At the moment, all atomic MMOs are also volatile, but I'm about to change that.

Differential Revision: https://reviews.llvm.org/D58490

Note: D58490 landed in several pieces as individual backends were approved. This is the last chunk.
llvm-svn: 354845
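
The guard being applied across the in-tree backends is roughly the following (a minimal sketch, not the code of any particular backend; the helper name is hypothetical):

    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/MachineMemOperand.h"

    // Conservative guard: refuse to merge or reorder any instruction whose
    // memory operand is volatile or atomic, since doing so could change
    // observable behavior.
    static bool isSafeToTouch(const llvm::MachineInstr &MI) {
      if (!MI.hasOneMemOperand())
        return false; // unclear memory effects: stay conservative
      const llvm::MachineMemOperand &MMO = **MI.memoperands_begin();
      return !MMO.isVolatile() && !MMO.isAtomic();
    }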
preames committed Feb 26, 2019
1 parent d2a56ac commit 38b14e3
7 changes: 5 additions & 2 deletions llvm/lib/Target/ARM/ARMLoadStoreOptimizer.cpp
@@ -1580,7 +1580,9 @@ static bool isMemoryOp(const MachineInstr &MI) {
   const MachineMemOperand &MMO = **MI.memoperands_begin();
 
   // Don't touch volatile memory accesses - we may be changing their order.
-  if (MMO.isVolatile())
+  // TODO: We could allow unordered and monotonic atomics here, but we need to
+  // make sure the resulting ldm/stm is correctly marked as atomic.
+  if (MMO.isVolatile() || MMO.isAtomic())
     return false;
 
   // Unaligned ldr/str is emulated by some kernels, but unaligned ldm/stm is
@@ -2144,7 +2146,8 @@ ARMPreAllocLoadStoreOpt::CanFormLdStDWord(MachineInstr *Op0, MachineInstr *Op1,
   // At the moment, we ignore the memoryoperand's value.
   // If we want to use AliasAnalysis, we should check it accordingly.
   if (!Op0->hasOneMemOperand() ||
-      (*Op0->memoperands_begin())->isVolatile())
+      (*Op0->memoperands_begin())->isVolatile() ||
+      (*Op0->memoperands_begin())->isAtomic())
     return false;
 
   unsigned Align = (*Op0->memoperands_begin())->getAlignment();
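
The TODO in the first hunk points at the follow-up: unordered and monotonic atomics impose no ordering constraint that ldm/stm formation would inherently break, so they could be allowed once the merged instruction is correctly marked as atomic. A minimal sketch of that relaxed check, assuming the MachineMemOperand API of this era (the helper is hypothetical, not part of the patch):

    #include "llvm/CodeGen/MachineMemOperand.h"
    #include "llvm/Support/AtomicOrdering.h"

    // Hypothetical relaxation of the guard: accept non-atomic accesses plus
    // the two weakest atomic orderings; reject volatile accesses and anything
    // acquire/release or stronger.
    static bool hasMergeableOrdering(const llvm::MachineMemOperand &MMO) {
      if (MMO.isVolatile())
        return false;
      switch (MMO.getOrdering()) {
      case llvm::AtomicOrdering::NotAtomic:
      case llvm::AtomicOrdering::Unordered:
      case llvm::AtomicOrdering::Monotonic:
        return true;
      default:
        return false;
      }
    }

Even with such a check, the memory operand attached to the merged ldm/stm would still have to carry the atomic marking, which is exactly the piece the TODO flags as missing.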
