This reverts commits r266002, r266011, and r266016.
They broke the msan bot.

Original message:

Add __atomic_* lowering to AtomicExpandPass.

AtomicExpandPass can now lower atomic load, atomic store, atomicrmw, and
cmpxchg instructions to __atomic_* library calls, when the target
doesn't support atomics of a given size.

This is the first step towards moving all atomic lowering from clang
into llvm. When all is done, the behavior of __sync_* builtins,
__atomic_* builtins, and C11 atomics will be unified.

Previously LLVM would pass everything through to the ISelLowering
code. There, unsupported atomic instructions would turn into __sync_*
library calls. Because of that behavior, Clang currently avoids emitting
llvm IR atomic instructions when this would happen, and emits __atomic_*
library functions itself, in the frontend.

This change makes LLVM able to emit __atomic_* libcalls, and thus will
eventually allow clang to depend on LLVM to do the right thing.

It is advantageous to do the new lowering to atomic libcalls in
AtomicExpandPass, before ISel time, because it's important that all
atomic operations for a given size either lower to __atomic_*
libcalls (which may use locks), or native instructions which won't. No
mixing and matching.

At the moment, this code is enabled only for SPARC, as a
demonstration. The next commit will expand support to all of the other
targets.

Differential Revision: http://reviews.llvm.org/D18200

llvm-svn: 266062
espindola committed Apr 12, 2016
1 parent ee1590f commit d41b54b
Showing 8 changed files with 22 additions and 1,074 deletions.
174 changes: 12 additions & 162 deletions llvm/docs/Atomics.rst
@@ -413,28 +413,19 @@ The MachineMemOperand for all atomic operations is currently marked as volatile;
this is not correct in the IR sense of volatile, but CodeGen handles anything
marked volatile very conservatively. This should get fixed at some point.

One very important property of the atomic operations is that if your backend
supports any inline lock-free atomic operations of a given size, you should
support *ALL* operations of that size in a lock-free manner.

When the target implements atomic ``cmpxchg`` or LL/SC instructions (as most do)
this is trivial: all the other operations can be implemented on top of those
primitives. However, on many older CPUs (e.g. ARMv5, SparcV8, Intel 80386) there
are atomic load and store instructions, but no ``cmpxchg`` or LL/SC. Since it
is invalid to implement ``atomic load`` with a native instruction while
implementing ``cmpxchg`` with a library call that takes a mutex, ``atomic
load`` must *also* expand to a library call on such architectures, so that it
remains atomic with respect to a simultaneous ``cmpxchg`` by using the same
mutex.

AtomicExpandPass can help with that: it will expand all atomic operations to the
proper ``__atomic_*`` libcalls for any size above the maximum set by
``setMaxAtomicSizeInBitsSupported`` (which defaults to 0).
Common architectures have some way of representing at least a pointer-sized
lock-free ``cmpxchg``; such an operation can be used to implement all the other
atomic operations which can be represented in IR up to that size. Backends are
expected to implement all those operations, but not operations which cannot be
implemented in a lock-free manner. It is expected that backends will give an
error when given an operation which cannot be implemented. (The LLVM code
generator is not very helpful here at the moment, but hopefully that will
change.)
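
As a user-level illustration of this boundary (not part of this patch; it
assumes a typical 64-bit target and C++11's ``std::atomic``, and may need the
platform's atomic support library, e.g. ``-latomic``, at link time), every
operation on a type that is too large for lock-free support goes through the
library consistently::

  #include <atomic>
  #include <cstdio>

  struct Big { char bytes[32]; };  // wider than any native atomic on common targets

  int main() {
    std::atomic<int> small{0};
    std::atomic<Big> big{};

    // The int is handled with native instructions...
    std::printf("int lock-free: %d\n", (int)small.is_lock_free());
    // ...while *every* load/store/RMW on Big is routed to the __atomic_*
    // library routines, which may take a lock internally. No mixing of the
    // two strategies for a given size.
    std::printf("Big lock-free: %d\n", (int)big.is_lock_free());

    Big b = big.load();   // library call under the hood
    big.store(b);         // protected by the same lock as the load
    return 0;
  }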

On x86, all atomic loads generate a ``MOV``. SequentiallyConsistent stores
generate an ``XCHG``, other stores generate a ``MOV``. SequentiallyConsistent
fences generate an ``MFENCE``, other fences do not cause any code to be
generated. ``cmpxchg`` uses the ``LOCK CMPXCHG`` instruction. ``atomicrmw xchg``
generated. cmpxchg uses the ``LOCK CMPXCHG`` instruction. ``atomicrmw xchg``
uses ``XCHG``, ``atomicrmw add`` and ``atomicrmw sub`` use ``XADD``, and all
other ``atomicrmw`` operations generate a loop with ``LOCK CMPXCHG``. Depending
on the users of the result, some ``atomicrmw`` operations can be translated into
@@ -455,151 +446,10 @@ atomic constructs. Here are some lowerings it can do:
``emitStoreConditional()``
* large loads/stores -> ll-sc/cmpxchg
by overriding ``shouldExpandAtomicStoreInIR()``/``shouldExpandAtomicLoadInIR()``
* strong atomic accesses -> monotonic accesses + fences by overriding
``shouldInsertFencesForAtomic()``, ``emitLeadingFence()``, and
``emitTrailingFence()``
* strong atomic accesses -> monotonic accesses + fences
by using ``setInsertFencesForAtomic()`` and overriding ``emitLeadingFence()``
and ``emitTrailingFence()``
* atomic rmw -> loop with cmpxchg or load-linked/store-conditional
by overriding ``expandAtomicRMWInIR()``
* expansion to __atomic_* libcalls for unsupported sizes.

For an example of all of these, look at the ARM backend.
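
To make the ``atomic rmw`` case above concrete, the cmpxchg loop it expands to
has the same shape as this portable C++ sketch (an illustration of the pattern
only, not the pass's actual IR output)::

  #include <atomic>

  // fetch_add expressed as a compare-exchange loop: the shape AtomicExpandPass
  // produces when a target has cmpxchg/LL-SC but no native atomicrmw add.
  int fetch_add_via_cmpxchg(std::atomic<int> &a, int v) {
    int old = a.load(std::memory_order_relaxed);
    // Retry until no other thread changed the value between our read and our
    // compare-exchange; on failure, 'old' is reloaded with the current value.
    while (!a.compare_exchange_weak(old, old + v,
                                    std::memory_order_seq_cst,
                                    std::memory_order_relaxed)) {
    }
    return old;  // value before the addition, matching atomicrmw semantics
  }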

Libcalls: __atomic_*
====================

There are two kinds of atomic library calls that are generated by LLVM. Please
note that both sets of library functions somewhat confusingly share the names of
builtin functions defined by clang. Despite this, the library functions are
not directly related to the builtins: it is *not* the case that ``__atomic_*``
builtins lower to ``__atomic_*`` library calls and ``__sync_*`` builtins lower
to ``__sync_*`` library calls.

The first set of library functions are named ``__atomic_*``. This set has been
"standardized" by GCC, and is described below. (See also `GCC's documentation
<https://gcc.gnu.org/wiki/Atomic/GCCMM/LIbrary>`_)

LLVM's AtomicExpandPass will translate atomic operations on data sizes above
``MaxAtomicSizeInBitsSupported`` into calls to these functions.

There are four generic functions, which can be called with data of any size or
alignment::

void __atomic_load(size_t size, void *ptr, void *ret, int ordering)
void __atomic_store(size_t size, void *ptr, void *val, int ordering)
void __atomic_exchange(size_t size, void *ptr, void *val, void *ret, int ordering)
bool __atomic_compare_exchange(size_t size, void *ptr, void *expected, void *desired, int success_order, int failure_order)
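
For illustration (this assumes the GCC/Clang ``__atomic`` builtins and a
runtime such as libatomic or compiler-rt providing the symbol at link time;
``Blob`` and ``cas_blob`` are made-up names), a compare-and-swap on an
oversized type ends up as a call to the generic ``__atomic_compare_exchange``
entry point above::

  struct Blob { char data[32]; };  // no lock-free support at this size

  // The compiler's generic __atomic builtin accepts any trivially copyable
  // type; for an unsupported size it lowers to a call to the library routine
  // __atomic_compare_exchange(size, ptr, expected, desired, success, failure).
  bool cas_blob(Blob *obj, Blob *expected, Blob *desired) {
    return __atomic_compare_exchange(obj, expected, desired,
                                     /*weak=*/false,
                                     __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
  }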

There are also size-specialized versions of the above functions, which can only
be used with *naturally-aligned* pointers of the appropriate size. In the
signatures below, "N" is one of 1, 2, 4, 8, and 16, and "iN" is the appropriate
integer type of that size; if no such integer type exists, the specialization
cannot be used::

iN __atomic_load_N(iN *ptr, int ordering)
void __atomic_store_N(iN *ptr, iN val, int ordering)
iN __atomic_exchange_N(iN *ptr, iN val, int ordering)
bool __atomic_compare_exchange_N(iN *ptr, iN *expected, iN desired, int success_order, int failure_order)

Finally there are some read-modify-write functions, which are only available in
the size-specific variants (any other sizes use a ``__atomic_compare_exchange``
loop)::

iN __atomic_fetch_add_N(iN *ptr, iN val, int ordering)
iN __atomic_fetch_sub_N(iN *ptr, iN val, int ordering)
iN __atomic_fetch_and_N(iN *ptr, iN val, int ordering)
iN __atomic_fetch_or_N(iN *ptr, iN val, int ordering)
iN __atomic_fetch_xor_N(iN *ptr, iN val, int ordering)
iN __atomic_fetch_nand_N(iN *ptr, iN val, int ordering)

This set of library functions has some interesting implementation requirements
to take note of:

- They support all sizes and alignments -- including those which cannot be
implemented natively on any existing hardware. Therefore, they will certainly
use mutexes for some sizes/alignments.

- As a consequence, they cannot be shipped in a statically linked
compiler-support library, as they have state which must be shared amongst all
DSOs loaded in the program. They must be provided in a shared library used by
all objects.

- The set of atomic sizes supported lock-free must be a superset of the sizes
any compiler can emit. That is: if a new compiler introduces support for
inline-lock-free atomics of size N, the ``__atomic_*`` functions must also have a
lock-free implementation for size N. This is a requirement so that code
produced by an old compiler (which will have called the ``__atomic_*`` function)
interoperates with code produced by the new compiler (which will use native
the atomic instruction).

Note that it's possible to write an entirely target-independent implementation
of these library functions by using the compiler atomic builtins themselves to
implement the operations on naturally-aligned pointers of supported sizes, and a
generic mutex implementation otherwise.
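
A minimal sketch of that approach for the generic load entry point (the
function and lock-table names are illustrative, and the lock-free fast path
for supported sizes is omitted; compiler-rt's ``atomic.c`` is a complete
example)::

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <mutex>

  namespace {
  // A small table of locks indexed by a hash of the object's address, shared
  // by all oversized atomics in the process.
  constexpr size_t kNumLocks = 64;
  std::mutex g_locks[kNumLocks];

  std::mutex &lock_for(const void *ptr) {
    return g_locks[(reinterpret_cast<std::uintptr_t>(ptr) >> 4) % kNumLocks];
  }
  }  // namespace

  // Illustrative stand-in for the generic libcall: copy 'size' bytes out of
  // *ptr into *ret while holding the lock guarding *ptr. The ordering argument
  // is ignored; the mutex already provides the strongest ordering.
  extern "C" void example_atomic_load(size_t size, void *ptr, void *ret,
                                      int /*ordering*/) {
    std::lock_guard<std::mutex> guard(lock_for(ptr));
    std::memcpy(ret, ptr, size);
  }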

Libcalls: __sync_*
==================

Some targets or OS/target combinations can support lock-free atomics, but for
various reasons, it is not practical to emit the instructions inline.

There are two typical examples of this.

Some CPUs support multiple instruction sets which can be switched back and forth
on function-call boundaries. For example, MIPS supports the MIPS16 ISA, which
has a smaller instruction encoding than the usual MIPS32 ISA. ARM, similarly,
has the Thumb ISA. In MIPS16 and earlier versions of Thumb, the atomic
instructions are not encodable. However, those instructions are available via a
call to a function built with the longer encoding.

Additionally, a few OS/target pairs provide kernel-supported lock-free
atomics. ARM/Linux is an example of this: the kernel `provides
<https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt>`_ a
function which on older CPUs contains a "magically-restartable" atomic sequence
(which looks atomic so long as there's only one CPU), and contains actual atomic
instructions on newer multicore models. This sort of functionality can typically
be provided on any architecture, if all CPUs which are missing atomic
compare-and-swap support are uniprocessor (no SMP). This is almost always the
case. The only common architecture without that property is SPARC -- SPARCV8 SMP
systems were common, yet it doesn't support any sort of compare-and-swap
operation.
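
For example, a compare-and-swap built on that helper might look like the
sketch below. The fixed address and calling convention are taken from the
linked kernel document (treat them as that ABI's details, not something
defined here), and the code is ARM/Linux-specific::

  #include <cstdint>

  // Per Documentation/arm/kernel_user_helpers.txt, __kuser_cmpxchg lives at
  // 0xffff0fc0 in the kernel-provided helper page and returns 0 if it
  // atomically replaced *ptr (which must have equalled oldval) with newval.
  using kuser_cmpxchg_t = int (*)(int32_t oldval, int32_t newval,
                                  volatile int32_t *ptr);
  static const kuser_cmpxchg_t kuser_cmpxchg =
      reinterpret_cast<kuser_cmpxchg_t>(0xffff0fc0);

  // __sync_val_compare_and_swap-style semantics: return the value seen in *ptr.
  int32_t cas_via_kernel_helper(volatile int32_t *ptr, int32_t expected,
                                int32_t desired) {
    for (;;) {
      int32_t cur = *ptr;
      if (cur != expected)
        return cur;                        // someone else changed it first
      if (kuser_cmpxchg(expected, desired, ptr) == 0)
        return expected;                   // swap succeeded
      // The helper failed (contention or sequence restart); re-check and retry.
    }
  }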

In either of these cases, the Target in LLVM can claim support for atomics of an
appropriate size, and then implement some subset of the operations via libcalls
to a ``__sync_*`` function. Such functions *must* not use locks in their
implementation, because unlike the ``__atomic_*`` routines used by
AtomicExpandPass, these may be mixed-and-matched with native instructions by the
target lowering.

Further, these routines do not need to be shared, as they are stateless. So,
there is no issue with having multiple copies included in one binary. Thus,
typically these routines are implemented by the statically-linked compiler
runtime support library.
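
As a sketch of such a routine (the name is illustrative; the compare-and-swap
primitive is written here with the corresponding compiler builtin),
fetch-and-add can be a bare CAS loop with no global state::

  #include <cstdint>

  // Stand-in for __sync_fetch_and_add_4: a CAS loop over the compare-and-swap
  // primitive. No locks and no state, so duplicate statically linked copies of
  // this routine can coexist safely in one process.
  extern "C" int32_t example_sync_fetch_and_add_4(int32_t *ptr, int32_t val) {
    int32_t old = *ptr;
    for (;;) {
      int32_t seen = __sync_val_compare_and_swap(ptr, old, old + val);
      if (seen == old)
        return old;    // we installed old + val; report the previous value
      old = seen;      // lost the race; retry with the value we actually saw
    }
  }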

LLVM will emit a call to an appropriate ``__sync_*`` routine if the target
ISelLowering code has set the corresponding ``ATOMIC_CMPXCHG``, ``ATOMIC_SWAP``,
or ``ATOMIC_LOAD_*`` operation to "Expand", and if it has opted into the
availability of those library functions via a call to ``initSyncLibcalls()``.

The full set of functions that may be called by LLVM is (for ``N`` being 1, 2,
4, 8, or 16)::

iN __sync_val_compare_and_swap_N(iN *ptr, iN expected, iN desired)
iN __sync_lock_test_and_set_N(iN *ptr, iN val)
iN __sync_fetch_and_add_N(iN *ptr, iN val)
iN __sync_fetch_and_sub_N(iN *ptr, iN val)
iN __sync_fetch_and_and_N(iN *ptr, iN val)
iN __sync_fetch_and_or_N(iN *ptr, iN val)
iN __sync_fetch_and_xor_N(iN *ptr, iN val)
iN __sync_fetch_and_nand_N(iN *ptr, iN val)
iN __sync_fetch_and_max_N(iN *ptr, iN val)
iN __sync_fetch_and_umax_N(iN *ptr, iN val)
iN __sync_fetch_and_min_N(iN *ptr, iN val)
iN __sync_fetch_and_umin_N(iN *ptr, iN val)

This list doesn't include any function for atomic load or store; all known
architectures support atomic loads and stores directly (possibly by emitting a
fence on either side of a normal load or store).

There is also, somewhat separately, the possibility of lowering ``ATOMIC_FENCE`` to
``__sync_synchronize()``. This happens (or not) independently of all the
above, controlled purely by ``setOperationAction(ISD::ATOMIC_FENCE, ...)``.
73 changes: 1 addition & 72 deletions llvm/include/llvm/CodeGen/RuntimeLibcalls.h
@@ -336,11 +336,7 @@ namespace RTLIB {
// EXCEPTION HANDLING
UNWIND_RESUME,

// Note: there's two sets of atomics libcalls; see
// <http://llvm.org/docs/Atomics.html> for more info on the
// difference between them.

// Atomic '__sync_*' libcalls.
// Family ATOMICs
SYNC_VAL_COMPARE_AND_SWAP_1,
SYNC_VAL_COMPARE_AND_SWAP_2,
SYNC_VAL_COMPARE_AND_SWAP_4,
@@ -402,73 +398,6 @@ namespace RTLIB {
SYNC_FETCH_AND_UMIN_8,
SYNC_FETCH_AND_UMIN_16,

// Atomic '__atomic_*' libcalls.
ATOMIC_LOAD,
ATOMIC_LOAD_1,
ATOMIC_LOAD_2,
ATOMIC_LOAD_4,
ATOMIC_LOAD_8,
ATOMIC_LOAD_16,

ATOMIC_STORE,
ATOMIC_STORE_1,
ATOMIC_STORE_2,
ATOMIC_STORE_4,
ATOMIC_STORE_8,
ATOMIC_STORE_16,

ATOMIC_EXCHANGE,
ATOMIC_EXCHANGE_1,
ATOMIC_EXCHANGE_2,
ATOMIC_EXCHANGE_4,
ATOMIC_EXCHANGE_8,
ATOMIC_EXCHANGE_16,

ATOMIC_COMPARE_EXCHANGE,
ATOMIC_COMPARE_EXCHANGE_1,
ATOMIC_COMPARE_EXCHANGE_2,
ATOMIC_COMPARE_EXCHANGE_4,
ATOMIC_COMPARE_EXCHANGE_8,
ATOMIC_COMPARE_EXCHANGE_16,

ATOMIC_FETCH_ADD_1,
ATOMIC_FETCH_ADD_2,
ATOMIC_FETCH_ADD_4,
ATOMIC_FETCH_ADD_8,
ATOMIC_FETCH_ADD_16,

ATOMIC_FETCH_SUB_1,
ATOMIC_FETCH_SUB_2,
ATOMIC_FETCH_SUB_4,
ATOMIC_FETCH_SUB_8,
ATOMIC_FETCH_SUB_16,

ATOMIC_FETCH_AND_1,
ATOMIC_FETCH_AND_2,
ATOMIC_FETCH_AND_4,
ATOMIC_FETCH_AND_8,
ATOMIC_FETCH_AND_16,

ATOMIC_FETCH_OR_1,
ATOMIC_FETCH_OR_2,
ATOMIC_FETCH_OR_4,
ATOMIC_FETCH_OR_8,
ATOMIC_FETCH_OR_16,

ATOMIC_FETCH_XOR_1,
ATOMIC_FETCH_XOR_2,
ATOMIC_FETCH_XOR_4,
ATOMIC_FETCH_XOR_8,
ATOMIC_FETCH_XOR_16,

ATOMIC_FETCH_NAND_1,
ATOMIC_FETCH_NAND_2,
ATOMIC_FETCH_NAND_4,
ATOMIC_FETCH_NAND_8,
ATOMIC_FETCH_NAND_16,

ATOMIC_IS_LOCK_FREE,

// Stack Protector Fail.
STACKPROTECTOR_CHECK_FAIL,

19 changes: 0 additions & 19 deletions llvm/include/llvm/Target/TargetLowering.h
@@ -1059,14 +1059,6 @@ class TargetLoweringBase {
/// \name Helpers for atomic expansion.
/// @{

/// Returns the maximum atomic operation size (in bits) supported by
/// the backend. Atomic operations greater than this size (as well
/// as ones that are not naturally aligned), will be expanded by
/// AtomicExpandPass into an __atomic_* library call.
unsigned getMaxAtomicSizeInBitsSupported() const {
return MaxAtomicSizeInBitsSupported;
}

/// Whether AtomicExpandPass should automatically insert fences and reduce
/// ordering for this atomic. This should be true for most architectures with
/// weak memory ordering. Defaults to false.
@@ -1464,14 +1456,6 @@ MinStackArgumentAlignment = Align;
MinStackArgumentAlignment = Align;
}

/// Set the maximum atomic operation size supported by the
/// backend. Atomic operations greater than this size (as well as
/// ones that are not naturally aligned), will be expanded by
/// AtomicExpandPass into an __atomic_* library call.
void setMaxAtomicSizeInBitsSupported(unsigned SizeInBits) {
MaxAtomicSizeInBitsSupported = SizeInBits;
}

public:
//===--------------------------------------------------------------------===//
// Addressing mode description hooks (used by LSR etc).
@@ -1881,9 +1865,6 @@ /// The preferred loop alignment.
/// The preferred loop alignment.
unsigned PrefLoopAlignment;

/// Size in bits of the maximum atomics size the backend supports.
/// Accesses larger than this will be expanded by AtomicExpandPass.
unsigned MaxAtomicSizeInBitsSupported;

/// If set to a physical register, this specifies the register that
/// llvm.savestack/llvm.restorestack should save and restore.
Expand Down
