From 0302eb2a2da6c512b19db13cc5848c2c4e462f7c Mon Sep 17 00:00:00 2001 From: Lars T Hansen Date: Tue, 21 Aug 2018 21:13:12 +0200 Subject: [PATCH 1/9] Bug 1394420 - jit-generate atomic ops to be called from c++. r=nbp, r=froydnj SpiderMonkey (and eventually DOM) will sometimes access shared memory from multiple threads without synchronization; this is a natural consequence of the JS memory model + JS/DOM specs. We have always had a hardware-specific abstraction layer for these accesses, to isolate code from the details of how unsynchronized / racy access is handled. This layer has been written in C++ and has several problems: - In C++, racy access is undefined behavior, and the abstraction layer is therefore inherently unsafe, especially in the presence of inlining, PGO, and clever compilers. (And TSAN will start complaining, too.) - Some of the compiler intrinsics that are used in the C++ abstraction layer are not actually the right primitives -- they assume C++, ie non-racy, semantics, and may not implement the correct barriers in all cases. - There are few guarantees that the synchronization implemented by the C++ primitives is actually compatible with the synchronization used by jitted code. - While x86 and ARM have 8-byte synchronized access (CMPXCHG8B and LDREXD/STREXD), some C++ compilers do not support their use well or at all, leading to occasional hardship for porting teams. This patch solves all these problems by jit-generating the racy access abstraction layer in the form of C++-compatible functions that: do not trigger UB in the C++ code; do not depend on possibly-incorrect intrinsics but instead always emit the proper barriers; are guaranteed to be JIT-compatible; and support 8-byte operations on x86 properly. Mostly this code is straightforward: each access function is a short, nearly prologue- and epilogue-less sequence of instructions that performs a normal load or store or appropriately synchronized operation (CMPXCHG or similar). Safe-for-races memcpy and memmove are trickier but are handled by combining some C++ code with several jit-generated functions that perform unrolled copies for various block sizes and alignments. The performance story is not completely satisfactory: On the one hand, we don't regress anything, because for unshared-to-unshared copies we do not use the new primitives but instead the C++ compiler's optimized memcpy and standard memory loads and stores. On the other hand, performance with shared memory is lower than performance with unshared memory. TypedArray.prototype.set() is a good test case. When the source and target arrays have the same type, the engine uses a memcpy; shared memory copying is 3x slower than unshared memory for 100,000 8K copies (Uint8). However, when the source and target arrays are slightly different types (Uint8 vs Int8), the engine uses individual loads and stores, which for shared memory turns into two calls per byte being moved; in this case, shared memory is 127x slower than unshared memory. (All numbers on x64 Linux.) Can we live with the very significant slowdown in the latter case? It depends on the applications we envision for shared memory. Primarily, shared memory will be used as wasm heap memory, in which case most applications that need to move data will use all Uint8Array arrays and the slowdown is OK. But it is clearly a type of performance cliff. We can reduce the overhead by jit-generating more code, specifically code to perform the load, convert, and store in common cases. 
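As a rough illustration of the two copy paths described above (not part of the patch; the wrapper names are illustrative, but the jitted entry points are the ones declared later in this patch), the same-type path reduces to one bulk safe-for-races copy, while the mixed-type path performs one racy load and one racy store per element -- the "two calls per byte" cost:

    #include <stddef.h>
    #include <stdint.h>

    #include "jit/shared/AtomicOperations-shared-jit.h"  // declares the jitted entry points

    // Same-type case: one bulk safe-for-races copy for the whole range.
    static void CopySharedSameType(uint8_t* dest, const uint8_t* src, size_t n) {
      js::jit::AtomicMemcpyDownUnsynchronized(dest, src, n);
    }

    // Mixed-type case (e.g. Uint8 source, Int8 target): the engine converts
    // element by element, so each byte moved costs one jitted load call and
    // one jitted store call.
    static void CopySharedByteByByte(uint8_t* dest, const uint8_t* src, size_t n) {
      for (size_t i = 0; i < n; i++) {
        js::jit::AtomicStore8Unsynchronized(
            dest + i, js::jit::AtomicLoad8Unsynchronized(src + i));
      }
    }
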
More interestingly, and simpler, we can probably use memcpy in all cases by copying first (fairly fast) and then running a local fixup. A bug should be filed for this but IMO we're OK with the current solution. (Memcpy can also be further sped up in platform-specific ways by generating cleverer code that uses REP MOVS or SIMD or similar.) --HG-- extra : rebase_source : aada616ffa85501af6477a3e9867cfaa79f3b8e0 --- .../jit-test/tests/atomics/memcpy-fidelity.js | 181 +++ js/src/jit/AtomicOperations.h | 19 +- js/src/jit/MacroAssembler.h | 48 +- js/src/jit/arm/AtomicOperations-arm.h | 9 + js/src/jit/arm/MacroAssembler-arm.cpp | 29 +- js/src/jit/arm64/AtomicOperations-arm64-gcc.h | 9 + .../jit/arm64/AtomicOperations-arm64-msvc.h | 9 + js/src/jit/arm64/MacroAssembler-arm64.cpp | 23 + .../AtomicOperations-mips-shared.h | 9 + js/src/jit/moz.build | 5 +- .../jit/none/AtomicOperations-feeling-lucky.h | 9 + .../shared/AtomicOperations-shared-jit.cpp | 1018 +++++++++++++++++ .../jit/shared/AtomicOperations-shared-jit.h | 605 ++++++++++ js/src/jit/x64/MacroAssembler-x64.cpp | 50 +- js/src/jit/x86-shared/Assembler-x86-shared.h | 20 +- .../AtomicOperations-x86-shared-gcc.h | 9 + .../AtomicOperations-x86-shared-msvc.h | 9 + js/src/vm/Initialization.cpp | 5 + 18 files changed, 2032 insertions(+), 34 deletions(-) create mode 100644 js/src/jit-test/tests/atomics/memcpy-fidelity.js create mode 100644 js/src/jit/shared/AtomicOperations-shared-jit.cpp create mode 100644 js/src/jit/shared/AtomicOperations-shared-jit.h diff --git a/js/src/jit-test/tests/atomics/memcpy-fidelity.js b/js/src/jit-test/tests/atomics/memcpy-fidelity.js new file mode 100644 index 0000000000000..81eb63fba28a3 --- /dev/null +++ b/js/src/jit-test/tests/atomics/memcpy-fidelity.js @@ -0,0 +1,181 @@ +// In order not to run afoul of C++ UB we have our own non-C++ definitions of +// operations (they are actually jitted) that can operate racily on shared +// memory, see jit/shared/AtomicOperations-shared-jit.cpp. +// +// Operations on fixed-width 1, 2, 4, and 8 byte data are adequately tested +// elsewhere. Here we specifically test our safe-when-racy replacements of +// memcpy and memmove. +// +// There are two primitives in the engine, memcpy_down and memcpy_up. These are +// equivalent except when data overlap, in which case memcpy_down handles +// overlapping copies that move from higher to lower addresses and memcpy_up +// handles ditto from lower to higher. memcpy uses memcpy_down always while +// memmove selects the one to use dynamically based on its arguments. + +// Basic memcpy algorithm to be tested: +// +// - if src and target have the same alignment +// - byte copy up to word alignment +// - block copy as much as possible +// - word copy as much as possible +// - byte copy any tail +// - else if on a platform that can deal with unaligned access +// (ie, x86, ARM64, and ARM if the proper flag is set) +// - block copy as much as possible +// - word copy as much as possible +// - byte copy any tail +// - else // on a platform that can't deal with unaligned access +// (ie ARM without the flag or x86 DEBUG builds with the +// JS_NO_UNALIGNED_MEMCPY env var) +// - block copy with byte copies +// - word copy with byte copies +// - byte copy any tail + +var target_buf = new SharedArrayBuffer(1024); +var src_buf = new SharedArrayBuffer(1024); + +/////////////////////////////////////////////////////////////////////////// +// +// Different src and target buffer, this is memcpy "move down". 
The same +// code is used in the engine for overlapping buffers when target addresses +// are lower than source addresses. + +fill(src_buf); + +// Basic 1K perfectly aligned copy, copies blocks only. +{ + let target = new Uint8Array(target_buf); + let src = new Uint8Array(src_buf); + clear(target_buf); + target.set(src); + check(target_buf, 0, 1024, 0); +} + +// Buffers are equally aligned but not on a word boundary and not ending on a +// word boundary either, so this will copy first some bytes, then some blocks, +// then some words, and then some bytes. +{ + let fill = 0x79; + clear(target_buf, fill); + let target = new Uint8Array(target_buf, 1, 1022); + let src = new Uint8Array(src_buf, 1, 1022); + target.set(src); + check_fill(target_buf, 0, 1, fill); + check(target_buf, 1, 1023, 1); + check_fill(target_buf, 1023, 1024, fill); +} + +// Buffers are unequally aligned, we'll copy bytes only on some platforms and +// unaligned blocks/words on others. +{ + clear(target_buf); + let target = new Uint8Array(target_buf, 0, 1023); + let src = new Uint8Array(src_buf, 1); + target.set(src); + check(target_buf, 0, 1023, 1); + check_zero(target_buf, 1023, 1024); +} + +/////////////////////////////////////////////////////////////////////////// +// +// Overlapping src and target buffer and the target addresses are always +// higher than the source addresses, this is memcpy "move up" + +// Buffers are equally aligned but not on a word boundary and not ending on a +// word boundary either, so this will copy first some bytes, then some blocks, +// then some words, and then some bytes. +{ + fill(target_buf); + let target = new Uint8Array(target_buf, 9, 999); + let src = new Uint8Array(target_buf, 1, 999); + target.set(src); + check(target_buf, 9, 1008, 1); + check(target_buf, 1008, 1024, 1008 & 255); +} + +// Buffers are unequally aligned, we'll copy bytes only on some platforms and +// unaligned blocks/words on others. +{ + fill(target_buf); + let target = new Uint8Array(target_buf, 2, 1022); + let src = new Uint8Array(target_buf, 1, 1022); + target.set(src); + check(target_buf, 2, 1024, 1); +} + +/////////////////////////////////////////////////////////////////////////// +// +// Copy 0 to 127 bytes from and to a variety of addresses to check that we +// handle limits properly in these edge cases. + +// Too slow in debug-noopt builds but we don't want to flag the test as slow, +// since that means it'll never be run. + +if (this.getBuildConfiguration && !getBuildConfiguration().debug) +{ + let t = new Uint8Array(target_buf); + for (let my_src_buf of [src_buf, target_buf]) { + for (let size=0; size < 127; size++) { + for (let src_offs=0; src_offs < 8; src_offs++) { + for (let target_offs=0; target_offs < 8; target_offs++) { + clear(target_buf, Math.random()*255); + let target = new Uint8Array(target_buf, target_offs, size); + + // Zero is boring + let bias = (Math.random() * 100 % 12) | 0; + + // Note src may overlap target partially + let src = new Uint8Array(my_src_buf, src_offs, size); + for ( let i=0; i < size; i++ ) + src[i] = i+bias; + + // We expect these values to be unchanged by the copy + let below = target_offs > 0 ? 
t[target_offs - 1] : 0; + let above = t[target_offs + size]; + + // Copy + target.set(src); + + // Verify + check(target_buf, target_offs, target_offs + size, bias); + if (target_offs > 0) + assertEq(t[target_offs-1], below); + assertEq(t[target_offs+size], above); + } + } + } + } +} + + +// Utilities + +function clear(buf, fill) { + let a = new Uint8Array(buf); + for ( let i=0; i < a.length; i++ ) + a[i] = fill; +} + +function fill(buf) { + let a = new Uint8Array(buf); + for ( let i=0; i < a.length; i++ ) + a[i] = i & 255 +} + +function check(buf, from, to, startingWith) { + let a = new Uint8Array(buf); + for ( let i=from; i < to; i++ ) { + assertEq(a[i], startingWith); + startingWith = (startingWith + 1) & 255; + } +} + +function check_zero(buf, from, to) { + check_fill(buf, from, to, 0); +} + +function check_fill(buf, from, to, fill) { + let a = new Uint8Array(buf); + for ( let i=from; i < to; i++ ) + assertEq(a[i], fill); +} diff --git a/js/src/jit/AtomicOperations.h b/js/src/jit/AtomicOperations.h index 420ec8d9ccdc1..c70340453b49f 100644 --- a/js/src/jit/AtomicOperations.h +++ b/js/src/jit/AtomicOperations.h @@ -147,6 +147,13 @@ class AtomicOperations { size_t nbytes); public: + // On some platforms we generate code for the atomics at run-time; that + // happens here. + static bool Initialize(); + + // Deallocate the code segment for generated atomics functions. + static void ShutDown(); + // Test lock-freedom for any int32 value. This implements the // Atomics::isLockFree() operation in the ECMAScript Shared Memory and // Atomics specification, as follows: @@ -355,7 +362,9 @@ inline bool AtomicOperations::isLockfreeJS(int32_t size) { # endif #elif defined(__x86_64__) || defined(_M_X64) || defined(__i386__) || \ defined(_M_IX86) -# if defined(__clang__) || defined(__GNUC__) +# if defined(JS_CODEGEN_X86) || defined(JS_CODEGEN_X64) +# include "jit/shared/AtomicOperations-shared-jit.h" +# elif defined(__clang__) || defined(__GNUC__) # include "jit/x86-shared/AtomicOperations-x86-shared-gcc.h" # elif defined(_MSC_VER) # include "jit/x86-shared/AtomicOperations-x86-shared-msvc.h" @@ -363,13 +372,17 @@ inline bool AtomicOperations::isLockfreeJS(int32_t size) { # error "No AtomicOperations support for this platform+compiler combination" # endif #elif defined(__arm__) -# if defined(__clang__) || defined(__GNUC__) +# if defined(JS_CODEGEN_ARM) +# include "jit/shared/AtomicOperations-shared-jit.h" +# elif defined(__clang__) || defined(__GNUC__) # include "jit/arm/AtomicOperations-arm.h" # else # error "No AtomicOperations support for this platform+compiler combination" # endif #elif defined(__aarch64__) || defined(_M_ARM64) -# if defined(__clang__) || defined(__GNUC__) +# if defined(JS_CODEGEN_ARM64) +# include "jit/shared/AtomicOperations-shared-jit.h" +# elif defined(__clang__) || defined(__GNUC__) # include "jit/arm64/AtomicOperations-arm64-gcc.h" # elif defined(_MSC_VER) # include "jit/arm64/AtomicOperations-arm64-msvc.h" diff --git a/js/src/jit/MacroAssembler.h b/js/src/jit/MacroAssembler.h index edbb567c9a94d..aeffeb763619c 100644 --- a/js/src/jit/MacroAssembler.h +++ b/js/src/jit/MacroAssembler.h @@ -977,13 +977,13 @@ class MacroAssembler : public MacroAssemblerSpecific { // =============================================================== // Shift functions - // For shift-by-register there may be platform-specific - // variations, for example, x86 will perform the shift mod 32 but - // ARM will perform the shift mod 256. 
+ // For shift-by-register there may be platform-specific variations, for + // example, x86 will perform the shift mod 32 but ARM will perform the shift + // mod 256. // - // For shift-by-immediate the platform assembler may restrict the - // immediate, for example, the ARM assembler requires the count - // for 32-bit shifts to be in the range [0,31]. + // For shift-by-immediate the platform assembler may restrict the immediate, + // for example, the ARM assembler requires the count for 32-bit shifts to be + // in the range [0,31]. inline void lshift32(Imm32 shift, Register srcDest) PER_SHARED_ARCH; inline void rshift32(Imm32 shift, Register srcDest) PER_SHARED_ARCH; @@ -1947,6 +1947,14 @@ class MacroAssembler : public MacroAssemblerSpecific { Register offsetTemp, Register maskTemp, Register output) DEFINED_ON(mips_shared); + // x64: `output` must be rax. + // ARM: Registers must be distinct; `replacement` and `output` must be + // (even,odd) pairs. + + void compareExchange64(const Synchronization& sync, const Address& mem, + Register64 expected, Register64 replacement, + Register64 output) DEFINED_ON(arm, arm64, x64); + // Exchange with memory. Return the value initially in memory. // MIPS: `valueTemp`, `offsetTemp` and `maskTemp` must be defined for 8-bit // and 16-bit wide operations. @@ -1969,6 +1977,10 @@ class MacroAssembler : public MacroAssemblerSpecific { Register offsetTemp, Register maskTemp, Register output) DEFINED_ON(mips_shared); + void atomicExchange64(const Synchronization& sync, const Address& mem, + Register64 value, Register64 output) + DEFINED_ON(arm64, x64); + // Read-modify-write with memory. Return the value in memory before the // operation. // @@ -2010,6 +2022,15 @@ class MacroAssembler : public MacroAssemblerSpecific { Register valueTemp, Register offsetTemp, Register maskTemp, Register output) DEFINED_ON(mips_shared); + // x64: + // For Add and Sub, `temp` must be invalid. + // For And, Or, and Xor, `output` must be eax and `temp` must have a byte + // subregister. + + void atomicFetchOp64(const Synchronization& sync, AtomicOp op, + Register64 value, const Address& mem, Register64 temp, + Register64 output) DEFINED_ON(arm64, x64); + // ======================================================================== // Wasm atomic operations. // @@ -2133,11 +2154,13 @@ class MacroAssembler : public MacroAssemblerSpecific { const BaseIndex& mem, Register64 temp, Register64 output) DEFINED_ON(arm, mips32, x86); - // x86: `expected` must be the same as `output`, and must be edx:eax - // x86: `replacement` must be ecx:ebx + // x86: `expected` must be the same as `output`, and must be edx:eax. + // x86: `replacement` must be ecx:ebx. // x64: `output` must be rax. // ARM: Registers must be distinct; `replacement` and `output` must be - // (even,odd) pairs. MIPS: Registers must be distinct. + // (even,odd) pairs. + // ARM64: The base register in `mem` must not overlap `output`. + // MIPS: Registers must be distinct. void wasmCompareExchange64(const wasm::MemoryAccessDesc& access, const Address& mem, Register64 expected, @@ -2151,7 +2174,8 @@ class MacroAssembler : public MacroAssemblerSpecific { // x86: `value` must be ecx:ebx; `output` must be edx:eax. // ARM: Registers must be distinct; `value` and `output` must be (even,odd) - // pairs. MIPS: Registers must be distinct. + // pairs. + // MIPS: Registers must be distinct. 
void wasmAtomicExchange64(const wasm::MemoryAccessDesc& access, const Address& mem, Register64 value, @@ -2164,7 +2188,9 @@ class MacroAssembler : public MacroAssemblerSpecific { // x86: `output` must be edx:eax, `temp` must be ecx:ebx. // x64: For And, Or, and Xor `output` must be rax. // ARM: Registers must be distinct; `temp` and `output` must be (even,odd) - // pairs. MIPS: Registers must be distinct. MIPS32: `temp` should be invalid. + // pairs. + // MIPS: Registers must be distinct. + // MIPS32: `temp` should be invalid. void wasmAtomicFetchOp64(const wasm::MemoryAccessDesc& access, AtomicOp op, Register64 value, const Address& mem, diff --git a/js/src/jit/arm/AtomicOperations-arm.h b/js/src/jit/arm/AtomicOperations-arm.h index b65709b4e5667..403079d5283a1 100644 --- a/js/src/jit/arm/AtomicOperations-arm.h +++ b/js/src/jit/arm/AtomicOperations-arm.h @@ -32,6 +32,15 @@ # error "This file only for gcc-compatible compilers" #endif +inline bool js::jit::AtomicOperations::Initialize() { + // Nothing + return true; +} + +inline void js::jit::AtomicOperations::ShutDown() { + // Nothing +} + inline bool js::jit::AtomicOperations::hasAtomic8() { // This guard is really only for tier-2 and tier-3 systems: LDREXD and // STREXD have been available since ARMv6K, and only ARMv7 and later are diff --git a/js/src/jit/arm/MacroAssembler-arm.cpp b/js/src/jit/arm/MacroAssembler-arm.cpp index ddc562cf26027..17ec4e3f42122 100644 --- a/js/src/jit/arm/MacroAssembler-arm.cpp +++ b/js/src/jit/arm/MacroAssembler-arm.cpp @@ -5290,10 +5290,11 @@ void MacroAssembler::wasmAtomicLoad64(const wasm::MemoryAccessDesc& access, } template -static void WasmCompareExchange64(MacroAssembler& masm, - const wasm::MemoryAccessDesc& access, - const T& mem, Register64 expect, - Register64 replace, Register64 output) { +static void CompareExchange64(MacroAssembler& masm, + const wasm::MemoryAccessDesc* access, + const Synchronization& sync, const T& mem, + Register64 expect, Register64 replace, + Register64 output) { MOZ_ASSERT(expect != replace && replace != output && output != expect); MOZ_ASSERT((replace.low.code() & 1) == 0); @@ -5308,11 +5309,13 @@ static void WasmCompareExchange64(MacroAssembler& masm, SecondScratchRegisterScope scratch2(masm); Register ptr = ComputePointerForAtomic(masm, mem, scratch2); - masm.memoryBarrierBefore(access.sync()); + masm.memoryBarrierBefore(sync); masm.bind(&again); BufferOffset load = masm.as_ldrexd(output.low, output.high, ptr); - masm.append(access, load.getOffset()); + if (access) { + masm.append(*access, load.getOffset()); + } masm.as_cmp(output.low, O2Reg(expect.low)); masm.as_cmp(output.high, O2Reg(expect.high), MacroAssembler::Equal); @@ -5326,7 +5329,7 @@ static void WasmCompareExchange64(MacroAssembler& masm, masm.as_b(&again, MacroAssembler::Equal); masm.bind(&done); - masm.memoryBarrierAfter(access.sync()); + masm.memoryBarrierAfter(sync); } void MacroAssembler::wasmCompareExchange64(const wasm::MemoryAccessDesc& access, @@ -5334,7 +5337,8 @@ void MacroAssembler::wasmCompareExchange64(const wasm::MemoryAccessDesc& access, Register64 expect, Register64 replace, Register64 output) { - WasmCompareExchange64(*this, access, mem, expect, replace, output); + CompareExchange64(*this, &access, access.sync(), mem, expect, replace, + output); } void MacroAssembler::wasmCompareExchange64(const wasm::MemoryAccessDesc& access, @@ -5342,7 +5346,14 @@ void MacroAssembler::wasmCompareExchange64(const wasm::MemoryAccessDesc& access, Register64 expect, Register64 replace, Register64 output) { - 
WasmCompareExchange64(*this, access, mem, expect, replace, output); + CompareExchange64(*this, &access, access.sync(), mem, expect, replace, + output); +} + +void MacroAssembler::compareExchange64(const Synchronization& sync, + const Address& mem, Register64 expect, + Register64 replace, Register64 output) { + CompareExchange64(*this, nullptr, sync, mem, expect, replace, output); } template diff --git a/js/src/jit/arm64/AtomicOperations-arm64-gcc.h b/js/src/jit/arm64/AtomicOperations-arm64-gcc.h index 07b7901c3c3c8..5e406a5369557 100644 --- a/js/src/jit/arm64/AtomicOperations-arm64-gcc.h +++ b/js/src/jit/arm64/AtomicOperations-arm64-gcc.h @@ -18,6 +18,15 @@ # error "This file only for gcc-compatible compilers" #endif +inline bool js::jit::AtomicOperations::Initialize() { + // Nothing + return true; +} + +inline void js::jit::AtomicOperations::ShutDown() { + // Nothing +} + inline bool js::jit::AtomicOperations::hasAtomic8() { return true; } inline bool js::jit::AtomicOperations::isLockfree8() { diff --git a/js/src/jit/arm64/AtomicOperations-arm64-msvc.h b/js/src/jit/arm64/AtomicOperations-arm64-msvc.h index 4a70d9867cf9d..69b6dc424a926 100644 --- a/js/src/jit/arm64/AtomicOperations-arm64-msvc.h +++ b/js/src/jit/arm64/AtomicOperations-arm64-msvc.h @@ -37,6 +37,15 @@ // Note, _InterlockedCompareExchange takes the *new* value as the second // argument and the *comparand* (expected old value) as the third argument. +inline bool js::jit::AtomicOperations::Initialize() { + // Nothing + return true; +} + +inline void js::jit::AtomicOperations::ShutDown() { + // Nothing +} + inline bool js::jit::AtomicOperations::hasAtomic8() { return true; } inline bool js::jit::AtomicOperations::isLockfree8() { diff --git a/js/src/jit/arm64/MacroAssembler-arm64.cpp b/js/src/jit/arm64/MacroAssembler-arm64.cpp index 330405583e81f..295eb9a136662 100644 --- a/js/src/jit/arm64/MacroAssembler-arm64.cpp +++ b/js/src/jit/arm64/MacroAssembler-arm64.cpp @@ -1604,6 +1604,8 @@ static void CompareExchange(MacroAssembler& masm, Register scratch2 = temps.AcquireX().asUnsized(); MemOperand ptr = ComputePointerForAtomic(masm, mem, scratch2); + MOZ_ASSERT(ptr.base().asUnsized() != output); + masm.memoryBarrierBefore(sync); Register scratch = temps.AcquireX().asUnsized(); @@ -1707,6 +1709,27 @@ void MacroAssembler::compareExchange(Scalar::Type type, output); } +void MacroAssembler::compareExchange64(const Synchronization& sync, + const Address& mem, Register64 expect, + Register64 replace, Register64 output) { + CompareExchange(*this, nullptr, Scalar::Int64, Width::_64, sync, mem, + expect.reg, replace.reg, output.reg); +} + +void MacroAssembler::atomicExchange64(const Synchronization& sync, + const Address& mem, Register64 value, + Register64 output) { + AtomicExchange(*this, nullptr, Scalar::Int64, Width::_64, sync, mem, + value.reg, output.reg); +} + +void MacroAssembler::atomicFetchOp64(const Synchronization& sync, AtomicOp op, + Register64 value, const Address& mem, + Register64 temp, Register64 output) { + AtomicFetchOp(*this, nullptr, Scalar::Int64, Width::_64, sync, op, mem, + value.reg, temp.reg, output.reg); +} + void MacroAssembler::wasmCompareExchange(const wasm::MemoryAccessDesc& access, const Address& mem, Register oldval, Register newval, Register output) { diff --git a/js/src/jit/mips-shared/AtomicOperations-mips-shared.h b/js/src/jit/mips-shared/AtomicOperations-mips-shared.h index 7336532e69ae3..021f13164296d 100644 --- a/js/src/jit/mips-shared/AtomicOperations-mips-shared.h +++ 
b/js/src/jit/mips-shared/AtomicOperations-mips-shared.h @@ -61,6 +61,15 @@ struct MOZ_RAII AddressGuard { } // namespace jit } // namespace js +inline bool js::jit::AtomicOperations::Initialize() { + // Nothing + return true; +} + +inline void js::jit::AtomicOperations::ShutDown() { + // Nothing +} + inline bool js::jit::AtomicOperations::hasAtomic8() { return true; } inline bool js::jit::AtomicOperations::isLockfree8() { diff --git a/js/src/jit/moz.build b/js/src/jit/moz.build index 17b36f89ffbb3..e05c5f543d03b 100644 --- a/js/src/jit/moz.build +++ b/js/src/jit/moz.build @@ -112,6 +112,7 @@ if not CONFIG['ENABLE_ION']: elif CONFIG['JS_CODEGEN_X86'] or CONFIG['JS_CODEGEN_X64']: LOpcodesGenerated.inputs += ['x86-shared/LIR-x86-shared.h'] UNIFIED_SOURCES += [ + 'shared/AtomicOperations-shared-jit.cpp', 'x86-shared/Architecture-x86-shared.cpp', 'x86-shared/Assembler-x86-shared.cpp', 'x86-shared/AssemblerBuffer-x86-shared.cpp', @@ -154,6 +155,7 @@ elif CONFIG['JS_CODEGEN_ARM']: 'arm/MacroAssembler-arm.cpp', 'arm/MoveEmitter-arm.cpp', 'arm/Trampoline-arm.cpp', + 'shared/AtomicOperations-shared-jit.cpp', ] if CONFIG['JS_SIMULATOR_ARM']: UNIFIED_SOURCES += [ @@ -185,7 +187,8 @@ elif CONFIG['JS_CODEGEN_ARM64']: 'arm64/vixl/MozAssembler-vixl.cpp', 'arm64/vixl/MozCpu-vixl.cpp', 'arm64/vixl/MozInstructions-vixl.cpp', - 'arm64/vixl/Utils-vixl.cpp' + 'arm64/vixl/Utils-vixl.cpp', + 'shared/AtomicOperations-shared-jit.cpp', ] if CONFIG['JS_SIMULATOR_ARM64']: UNIFIED_SOURCES += [ diff --git a/js/src/jit/none/AtomicOperations-feeling-lucky.h b/js/src/jit/none/AtomicOperations-feeling-lucky.h index 21243a1acefeb..89852734d7d31 100644 --- a/js/src/jit/none/AtomicOperations-feeling-lucky.h +++ b/js/src/jit/none/AtomicOperations-feeling-lucky.h @@ -100,6 +100,15 @@ // Try to avoid platform #ifdefs below this point. +inline bool js::jit::AtomicOperations::Initialize() { + // Nothing + return true; +} + +inline void js::jit::AtomicOperations::ShutDown() { + // Nothing +} + #ifdef GNUC_COMPATIBLE inline bool js::jit::AtomicOperations::hasAtomic8() { diff --git a/js/src/jit/shared/AtomicOperations-shared-jit.cpp b/js/src/jit/shared/AtomicOperations-shared-jit.cpp new file mode 100644 index 0000000000000..fd0a1a109339d --- /dev/null +++ b/js/src/jit/shared/AtomicOperations-shared-jit.cpp @@ -0,0 +1,1018 @@ +/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- + * vim: set ts=8 sts=4 et sw=4 tw=99: + * This Source Code Form is subject to the terms of the Mozilla Public + * License, v. 2.0. If a copy of the MPL was not distributed with this + * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ + +#include "mozilla/Atomics.h" + +#ifdef JS_CODEGEN_ARM +# include "jit/arm/Architecture-arm.h" +#endif +#include "jit/AtomicOperations.h" +#include "jit/IonTypes.h" +#include "jit/MacroAssembler.h" +#include "jit/RegisterSets.h" + +#include "jit/MacroAssembler-inl.h" + +using namespace js; +using namespace js::jit; + +// Assigned registers must follow these rules: +// +// - if they overlap the argument registers (for arguments we use) then they +// +// M M U U SSSS TTTTT +// ====\ MM MM U U S T /==== +// =====> M M M U U SSS T <===== +// ====/ M M U U S T \==== +// M M UUU SSSS T +// +// require no register movement, even for 64-bit registers. (If this becomes +// too complex to handle then we need to create an abstraction that uses the +// MoveResolver, see comments on bug 1394420.) +// +// - they should be volatile when possible so that we don't have to save and +// restore them. 
+// +// Note that the functions we're generating have a very limited number of +// signatures, and the register assignments need only work for these signatures. +// The signatures are these: +// +// () +// (ptr) +// (ptr, val/val64) +// (ptr, ptr) +// (ptr, val/val64, val/val64) +// +// It would be nice to avoid saving and restoring all the nonvolatile registers +// for all the operations, and instead save and restore only the registers used +// by each specific operation, but the amount of protocol needed to accomplish +// that probably does not pay for itself. + +#if defined(JS_CODEGEN_X64) + +// Selected registers match the argument registers exactly, and none of them +// overlap the result register. + +static const LiveRegisterSet AtomicNonVolatileRegs; + +static constexpr Register AtomicPtrReg = IntArgReg0; +static constexpr Register AtomicPtr2Reg = IntArgReg1; +static constexpr Register AtomicValReg = IntArgReg1; +static constexpr Register64 AtomicValReg64(IntArgReg1); +static constexpr Register AtomicVal2Reg = IntArgReg2; +static constexpr Register64 AtomicVal2Reg64(IntArgReg2); +static constexpr Register AtomicTemp = IntArgReg3; +static constexpr Register64 AtomicTemp64(IntArgReg3); + +#elif defined(JS_CODEGEN_ARM64) + +// Selected registers match the argument registers, except that the Ptr is not +// in IntArgReg0 so as not to conflict with the result register. + +static const LiveRegisterSet AtomicNonVolatileRegs; + +static constexpr Register AtomicPtrReg = IntArgReg4; +static constexpr Register AtomicPtr2Reg = IntArgReg1; +static constexpr Register AtomicValReg = IntArgReg1; +static constexpr Register64 AtomicValReg64(IntArgReg1); +static constexpr Register AtomicVal2Reg = IntArgReg2; +static constexpr Register64 AtomicVal2Reg64(IntArgReg2); +static constexpr Register AtomicTemp = IntArgReg3; +static constexpr Register64 AtomicTemp64(IntArgReg3); + +#elif defined(JS_CODEGEN_ARM) + +// Assigned registers except temp are disjoint from the argument registers, +// since accounting for both 32-bit and 64-bit arguments and constraints on the +// result register is much too messy. The temp is in an argument register since +// it won't be used until we've moved all arguments to other registers. + +static const LiveRegisterSet AtomicNonVolatileRegs = + LiveRegisterSet(GeneralRegisterSet((uint32_t(1) << Registers::r4) | + (uint32_t(1) << Registers::r5) | + (uint32_t(1) << Registers::r6) | + (uint32_t(1) << Registers::r7) | + (uint32_t(1) << Registers::r8)), + FloatRegisterSet(0)); + +static constexpr Register AtomicPtrReg = r8; +static constexpr Register AtomicPtr2Reg = r6; +static constexpr Register AtomicTemp = r3; +static constexpr Register AtomicValReg = r6; +static constexpr Register64 AtomicValReg64(r7, r6); +static constexpr Register AtomicVal2Reg = r4; +static constexpr Register64 AtomicVal2Reg64(r5, r4); + +#elif defined(JS_CODEGEN_X86) + +// There are no argument registers. + +static const LiveRegisterSet AtomicNonVolatileRegs = + LiveRegisterSet(GeneralRegisterSet((1 << X86Encoding::rbx) | + (1 << X86Encoding::rsi)), + FloatRegisterSet(0)); + +static constexpr Register AtomicPtrReg = esi; +static constexpr Register AtomicPtr2Reg = ebx; +static constexpr Register AtomicValReg = ebx; +static constexpr Register AtomicVal2Reg = ecx; +static constexpr Register AtomicTemp = edx; + +// 64-bit registers for cmpxchg8b. ValReg/Val2Reg/Temp are not used in this +// case. 
+ +static constexpr Register64 AtomicValReg64(edx, eax); +static constexpr Register64 AtomicVal2Reg64(ecx, ebx); + +#else +# error "Not implemented - not a tier1 platform" +#endif + +// These are useful shorthands and hide the meaningless uint/int distinction. + +static constexpr Scalar::Type SIZE8 = Scalar::Uint8; +static constexpr Scalar::Type SIZE16 = Scalar::Uint16; +static constexpr Scalar::Type SIZE32 = Scalar::Uint32; +static constexpr Scalar::Type SIZE64 = Scalar::Int64; +#ifdef JS_64BIT +static constexpr Scalar::Type SIZEWORD = SIZE64; +#else +static constexpr Scalar::Type SIZEWORD = SIZE32; +#endif + +// A "block" is a sequence of bytes that is a reasonable quantum to copy to +// amortize call overhead when implementing memcpy and memmove. A block will +// not fit in registers on all platforms and copying it without using +// intermediate memory will therefore be sensitive to overlap. +// +// A "word" is an item that we can copy using only register intermediate storage +// on all platforms; words can be individually copied without worrying about +// overlap. +// +// Blocks and words can be aligned or unaligned; specific (generated) copying +// functions handle this in platform-specific ways. + +static constexpr size_t WORDSIZE = sizeof(uintptr_t); // Also see SIZEWORD above +static constexpr size_t BLOCKSIZE = 8 * WORDSIZE; // Must be a power of 2 + +static_assert(BLOCKSIZE % WORDSIZE == 0, "A block is an integral number of words"); + +static constexpr size_t WORDMASK = WORDSIZE - 1; +static constexpr size_t BLOCKMASK = BLOCKSIZE - 1; + +struct ArgIterator +{ + ABIArgGenerator abi; + unsigned argBase = 0; +}; + +static void GenGprArg(MacroAssembler& masm, MIRType t, ArgIterator* iter, + Register reg) { + MOZ_ASSERT(t == MIRType::Pointer || t == MIRType::Int32); + ABIArg arg = iter->abi.next(t); + switch (arg.kind()) { + case ABIArg::GPR: { + if (arg.gpr() != reg) { + masm.movePtr(arg.gpr(), reg); + } + break; + } + case ABIArg::Stack: { + Address src(masm.getStackPointer(), + iter->argBase + arg.offsetFromArgBase()); + masm.loadPtr(src, reg); + break; + } + default: { + MOZ_CRASH("Not possible"); + } + } +} + +static void GenGpr64Arg(MacroAssembler& masm, ArgIterator* iter, + Register64 reg) { + ABIArg arg = iter->abi.next(MIRType::Int64); + switch (arg.kind()) { + case ABIArg::GPR: { + if (arg.gpr64() != reg) { + masm.move64(arg.gpr64(), reg); + } + break; + } + case ABIArg::Stack: { + Address src(masm.getStackPointer(), + iter->argBase + arg.offsetFromArgBase()); +#ifdef JS_64BIT + masm.load64(src, reg); +#else + masm.load32(LowWord(src), reg.low); + masm.load32(HighWord(src), reg.high); +#endif + break; + } +#if defined(JS_CODEGEN_REGISTER_PAIR) + case ABIArg::GPR_PAIR: { + if (arg.gpr64() != reg) { + masm.move32(arg.oddGpr(), reg.high); + masm.move32(arg.evenGpr(), reg.low); + } + break; + } +#endif + default: { + MOZ_CRASH("Not possible"); + } + } +} + +static uint32_t GenPrologue(MacroAssembler& masm, ArgIterator* iter) { + masm.assumeUnreachable("Shouldn't get here"); + masm.flushBuffer(); + masm.haltingAlign(CodeAlignment); + masm.setFramePushed(0); + uint32_t start = masm.currentOffset(); + masm.PushRegsInMask(AtomicNonVolatileRegs); + iter->argBase = sizeof(void*) + masm.framePushed(); + return start; +} + +static void GenEpilogue(MacroAssembler& masm) { + masm.PopRegsInMask(AtomicNonVolatileRegs); + MOZ_ASSERT(masm.framePushed() == 0); +#if defined(JS_CODEGEN_ARM64) + masm.Ret(); +#elif defined(JS_CODEGEN_ARM) + masm.mov(lr, pc); +#else + masm.ret(); +#endif +} + 
+#ifndef JS_64BIT +static uint32_t GenNop(MacroAssembler& masm) { + ArgIterator iter; + uint32_t start = GenPrologue(masm, &iter); + GenEpilogue(masm); + return start; +} +#endif + +static uint32_t GenFenceSeqCst(MacroAssembler& masm) { + ArgIterator iter; + uint32_t start = GenPrologue(masm, &iter); + masm.memoryBarrier(MembarFull); + GenEpilogue(masm); + return start; +} + +static uint32_t GenLoad(MacroAssembler& masm, Scalar::Type size, + Synchronization sync) { + ArgIterator iter; + uint32_t start = GenPrologue(masm, &iter); + GenGprArg(masm, MIRType::Pointer, &iter, AtomicPtrReg); + + masm.memoryBarrier(sync.barrierBefore); + Address addr(AtomicPtrReg, 0); + switch (size) { + case SIZE8: + masm.load8ZeroExtend(addr, ReturnReg); + break; + case SIZE16: + masm.load16ZeroExtend(addr, ReturnReg); + break; + case SIZE32: + masm.load32(addr, ReturnReg); + break; + case SIZE64: +#if defined(JS_64BIT) + masm.load64(addr, ReturnReg64); + break; +#else + MOZ_CRASH("64-bit atomic load not available on this platform"); +#endif + default: + MOZ_CRASH("Unknown size"); + } + masm.memoryBarrier(sync.barrierAfter); + + GenEpilogue(masm); + return start; +} + +static uint32_t GenStore(MacroAssembler& masm, Scalar::Type size, + Synchronization sync) { + ArgIterator iter; + uint32_t start = GenPrologue(masm, &iter); + GenGprArg(masm, MIRType::Pointer, &iter, AtomicPtrReg); + + masm.memoryBarrier(sync.barrierBefore); + Address addr(AtomicPtrReg, 0); + switch (size) { + case SIZE8: + GenGprArg(masm, MIRType::Int32, &iter, AtomicValReg); + masm.store8(AtomicValReg, addr); + break; + case SIZE16: + GenGprArg(masm, MIRType::Int32, &iter, AtomicValReg); + masm.store16(AtomicValReg, addr); + break; + case SIZE32: + GenGprArg(masm, MIRType::Int32, &iter, AtomicValReg); + masm.store32(AtomicValReg, addr); + break; + case SIZE64: +#if defined(JS_64BIT) + GenGpr64Arg(masm, &iter, AtomicValReg64); + masm.store64(AtomicValReg64, addr); + break; +#else + MOZ_CRASH("64-bit atomic store not available on this platform"); +#endif + default: + MOZ_CRASH("Unknown size"); + } + masm.memoryBarrier(sync.barrierAfter); + + GenEpilogue(masm); + return start; +} + +enum class CopyDir { + DOWN, // Move data down, ie, iterate toward higher addresses + UP // The other way +}; + +static uint32_t GenCopy(MacroAssembler& masm, Scalar::Type size, + uint32_t unroll, CopyDir direction) { + ArgIterator iter; + uint32_t start = GenPrologue(masm, &iter); + + Register dest = AtomicPtrReg; + Register src = AtomicPtr2Reg; + + GenGprArg(masm, MIRType::Pointer, &iter, dest); + GenGprArg(masm, MIRType::Pointer, &iter, src); + + uint32_t offset = direction == CopyDir::DOWN ? 0 : unroll-1; + for (uint32_t i = 0; i < unroll; i++) { + switch (size) { + case SIZE8: + masm.load8ZeroExtend(Address(src, offset), AtomicTemp); + masm.store8(AtomicTemp, Address(dest, offset)); + break; + case SIZE16: + masm.load16ZeroExtend(Address(src, offset*2), AtomicTemp); + masm.store16(AtomicTemp, Address(dest, offset*2)); + break; + case SIZE32: + masm.load32(Address(src, offset*4), AtomicTemp); + masm.store32(AtomicTemp, Address(dest, offset*4)); + break; + case SIZE64: +#if defined(JS_64BIT) + masm.load64(Address(src, offset*8), AtomicTemp64); + masm.store64(AtomicTemp64, Address(dest, offset*8)); + break; +#else + MOZ_CRASH("64-bit atomic load/store not available on this platform"); +#endif + default: + MOZ_CRASH("Unknown size"); + } + offset += direction == CopyDir::DOWN ? 
1 : -1; + } + + GenEpilogue(masm); + return start; +} + +static uint32_t GenCmpxchg(MacroAssembler& masm, Scalar::Type size, + Synchronization sync) { + ArgIterator iter; + uint32_t start = GenPrologue(masm, &iter); + GenGprArg(masm, MIRType::Pointer, &iter, AtomicPtrReg); + + Address addr(AtomicPtrReg, 0); + switch (size) { + case SIZE8: + case SIZE16: + case SIZE32: + GenGprArg(masm, MIRType::Int32, &iter, AtomicValReg); + GenGprArg(masm, MIRType::Int32, &iter, AtomicVal2Reg); + masm.compareExchange(size, sync, addr, AtomicValReg, AtomicVal2Reg, ReturnReg); + break; + case SIZE64: + GenGpr64Arg(masm, &iter, AtomicValReg64); + GenGpr64Arg(masm, &iter, AtomicVal2Reg64); +#if defined(JS_CODEGEN_X86) + MOZ_ASSERT(AtomicValReg64 == Register64(edx, eax)); + MOZ_ASSERT(AtomicVal2Reg64 == Register64(ecx, ebx)); + masm.lock_cmpxchg8b(edx, eax, ecx, ebx, Operand(addr)); + + MOZ_ASSERT(ReturnReg64 == Register64(edi, eax)); + masm.mov(edx, edi); +#else + masm.compareExchange64(sync, addr, AtomicValReg64, AtomicVal2Reg64, ReturnReg64); +#endif + break; + default: + MOZ_CRASH("Unknown size"); + } + + GenEpilogue(masm); + return start; +} + +static uint32_t GenExchange(MacroAssembler& masm, Scalar::Type size, + Synchronization sync) { + ArgIterator iter; + uint32_t start = GenPrologue(masm, &iter); + GenGprArg(masm, MIRType::Pointer, &iter, AtomicPtrReg); + + Address addr(AtomicPtrReg, 0); + switch (size) { + case SIZE8: + case SIZE16: + case SIZE32: + GenGprArg(masm, MIRType::Int32, &iter, AtomicValReg); + masm.atomicExchange(size, sync, addr, AtomicValReg, ReturnReg); + break; + case SIZE64: +#if defined(JS_64BIT) + GenGpr64Arg(masm, &iter, AtomicValReg64); + masm.atomicExchange64(sync, addr, AtomicValReg64, ReturnReg64); + break; +#else + MOZ_CRASH("64-bit atomic exchange not available on this platform"); +#endif + default: + MOZ_CRASH("Unknown size"); + } + + GenEpilogue(masm); + return start; +} + +static uint32_t +GenFetchOp(MacroAssembler& masm, Scalar::Type size, AtomicOp op, + Synchronization sync) { + ArgIterator iter; + uint32_t start = GenPrologue(masm, &iter); + GenGprArg(masm, MIRType::Pointer, &iter, AtomicPtrReg); + + Address addr(AtomicPtrReg, 0); + switch (size) { + case SIZE8: + case SIZE16: + case SIZE32: { +#if defined(JS_CODEGEN_X86) || defined(JS_CODEGEN_X64) + Register tmp = op == AtomicFetchAddOp || op == AtomicFetchSubOp + ? Register::Invalid() + : AtomicTemp; +#else + Register tmp = AtomicTemp; +#endif + GenGprArg(masm, MIRType::Int32, &iter, AtomicValReg); + masm.atomicFetchOp(size, sync, op, AtomicValReg, addr, tmp, ReturnReg); + break; + } + case SIZE64: { +#if defined(JS_64BIT) +# if defined(JS_CODEGEN_X64) + Register64 tmp = op == AtomicFetchAddOp || op == AtomicFetchSubOp + ? 
Register64::Invalid() + : AtomicTemp64; +# else + Register64 tmp = AtomicTemp64; +# endif + GenGpr64Arg(masm, &iter, AtomicValReg64); + masm.atomicFetchOp64(sync, op, AtomicValReg64, addr, tmp, ReturnReg64); + break; +#else + MOZ_CRASH("64-bit atomic fetchOp not available on this platform"); +#endif + } + default: + MOZ_CRASH("Unknown size"); + } + + GenEpilogue(masm); + return start; +} + +namespace js { +namespace jit { + +void (*AtomicFenceSeqCst)(); + +#ifndef JS_64BIT +void (*AtomicCompilerFence)(); +#endif + +uint8_t (*AtomicLoad8SeqCst)(const uint8_t* addr); +uint16_t (*AtomicLoad16SeqCst)(const uint16_t* addr); +uint32_t (*AtomicLoad32SeqCst)(const uint32_t* addr); +#ifdef JS_64BIT +uint64_t (*AtomicLoad64SeqCst)(const uint64_t* addr); +#endif + +uint8_t (*AtomicLoad8Unsynchronized)(const uint8_t* addr); +uint16_t (*AtomicLoad16Unsynchronized)(const uint16_t* addr); +uint32_t (*AtomicLoad32Unsynchronized)(const uint32_t* addr); +#ifdef JS_64BIT +uint64_t (*AtomicLoad64Unsynchronized)(const uint64_t* addr); +#endif + +uint8_t (*AtomicStore8SeqCst)(uint8_t* addr, uint8_t val); +uint16_t (*AtomicStore16SeqCst)(uint16_t* addr, uint16_t val); +uint32_t (*AtomicStore32SeqCst)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +uint64_t (*AtomicStore64SeqCst)(uint64_t* addr, uint64_t val); +#endif + +uint8_t (*AtomicStore8Unsynchronized)(uint8_t* addr, uint8_t val); +uint16_t (*AtomicStore16Unsynchronized)(uint16_t* addr, uint16_t val); +uint32_t (*AtomicStore32Unsynchronized)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +uint64_t (*AtomicStore64Unsynchronized)(uint64_t* addr, uint64_t val); +#endif + +// See the definitions of BLOCKSIZE and WORDSIZE earlier. The "unaligned" +// functions perform individual byte copies (and must always be "down" or "up"). +// The others ignore alignment issues, and thus either depend on unaligned +// accesses being OK or not being invoked on unaligned addresses. +// +// src and dest point to the lower addresses of the respective data areas +// irrespective of "up" or "down". 
+ +static void (*AtomicCopyUnalignedBlockDownUnsynchronized)(uint8_t* dest, const uint8_t* src); +static void (*AtomicCopyUnalignedBlockUpUnsynchronized)(uint8_t* dest, const uint8_t* src); +static void (*AtomicCopyUnalignedWordDownUnsynchronized)(uint8_t* dest, const uint8_t* src); +static void (*AtomicCopyUnalignedWordUpUnsynchronized)(uint8_t* dest, const uint8_t* src); + +static void (*AtomicCopyBlockDownUnsynchronized)(uint8_t* dest, const uint8_t* src); +static void (*AtomicCopyBlockUpUnsynchronized)(uint8_t* dest, const uint8_t* src); +static void (*AtomicCopyWordUnsynchronized)(uint8_t* dest, const uint8_t* src); +static void (*AtomicCopyByteUnsynchronized)(uint8_t* dest, const uint8_t* src); + +uint8_t (*AtomicCmpXchg8SeqCst)(uint8_t* addr, uint8_t oldval, uint8_t newval); +uint16_t (*AtomicCmpXchg16SeqCst)(uint16_t* addr, uint16_t oldval, uint16_t newval); +uint32_t (*AtomicCmpXchg32SeqCst)(uint32_t* addr, uint32_t oldval, uint32_t newval); +uint64_t (*AtomicCmpXchg64SeqCst)(uint64_t* addr, uint64_t oldval, uint64_t newval); + +uint8_t (*AtomicExchange8SeqCst)(uint8_t* addr, uint8_t val); +uint16_t (*AtomicExchange16SeqCst)(uint16_t* addr, uint16_t val); +uint32_t (*AtomicExchange32SeqCst)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +uint64_t (*AtomicExchange64SeqCst)(uint64_t* addr, uint64_t val); +#endif + +uint8_t (*AtomicAdd8SeqCst)(uint8_t* addr, uint8_t val); +uint16_t (*AtomicAdd16SeqCst)(uint16_t* addr, uint16_t val); +uint32_t (*AtomicAdd32SeqCst)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +uint64_t (*AtomicAdd64SeqCst)(uint64_t* addr, uint64_t val); +#endif + +uint8_t (*AtomicAnd8SeqCst)(uint8_t* addr, uint8_t val); +uint16_t (*AtomicAnd16SeqCst)(uint16_t* addr, uint16_t val); +uint32_t (*AtomicAnd32SeqCst)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +uint64_t (*AtomicAnd64SeqCst)(uint64_t* addr, uint64_t val); +#endif + +uint8_t (*AtomicOr8SeqCst)(uint8_t* addr, uint8_t val); +uint16_t (*AtomicOr16SeqCst)(uint16_t* addr, uint16_t val); +uint32_t (*AtomicOr32SeqCst)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +uint64_t (*AtomicOr64SeqCst)(uint64_t* addr, uint64_t val); +#endif + +uint8_t (*AtomicXor8SeqCst)(uint8_t* addr, uint8_t val); +uint16_t (*AtomicXor16SeqCst)(uint16_t* addr, uint16_t val); +uint32_t (*AtomicXor32SeqCst)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +uint64_t (*AtomicXor64SeqCst)(uint64_t* addr, uint64_t val); +#endif + +static bool UnalignedAccessesAreOK() { +#ifdef DEBUG + const char* flag = getenv("JS_NO_UNALIGNED_MEMCPY"); + if (flag && *flag == '1') + return false; +#endif +#if defined(JS_CODEGEN_X86) || defined(JS_CODEGEN_X64) + return true; +#elif defined(JS_CODEGEN_ARM) + return !HasAlignmentFault(); +#elif defined(JS_CODEGEN_ARM64) + // This is not necessarily true but it's the best guess right now. + return true; +#else + return false; +#endif +} + +void AtomicMemcpyDownUnsynchronized(uint8_t* dest, const uint8_t* src, + size_t nbytes) { + const uint8_t* lim = src + nbytes; + + // Set up bulk copying. The cases are ordered the way they are on the + // assumption that if we can achieve aligned copies even with a little + // preprocessing then that is better than unaligned copying on a platform + // that supports it. 
+ + if (nbytes >= WORDSIZE) { + void (*copyBlock)(uint8_t* dest, const uint8_t* src); + void (*copyWord)(uint8_t* dest, const uint8_t* src); + + if (((uintptr_t(dest) ^ uintptr_t(src)) & WORDMASK) == 0) { + const uint8_t* cutoff = (const uint8_t*)JS_ROUNDUP(uintptr_t(src), + WORDSIZE); + MOZ_ASSERT(cutoff <= lim); // because nbytes >= WORDSIZE + while (src < cutoff) { + AtomicCopyByteUnsynchronized(dest++, src++); + } + copyBlock = AtomicCopyBlockDownUnsynchronized; + copyWord = AtomicCopyWordUnsynchronized; + } + else if (UnalignedAccessesAreOK()) { + copyBlock = AtomicCopyBlockDownUnsynchronized; + copyWord = AtomicCopyWordUnsynchronized; + } else { + copyBlock = AtomicCopyUnalignedBlockDownUnsynchronized; + copyWord = AtomicCopyUnalignedWordDownUnsynchronized; + } + + // Bulk copy, first larger blocks and then individual words. + + const uint8_t* blocklim = src + ((lim - src) & ~BLOCKMASK); + while (src < blocklim) { + copyBlock(dest, src); + dest += BLOCKSIZE; + src += BLOCKSIZE; + } + + const uint8_t* wordlim = src + ((lim - src) & ~WORDMASK); + while (src < wordlim) { + copyWord(dest, src); + dest += WORDSIZE; + src += WORDSIZE; + } + } + + // Byte copy any remaining tail. + + while (src < lim) { + AtomicCopyByteUnsynchronized(dest++, src++); + } +} + +void AtomicMemcpyUpUnsynchronized(uint8_t* dest, const uint8_t* src, + size_t nbytes) { + const uint8_t* lim = src; + + src += nbytes; + dest += nbytes; + + if (nbytes >= WORDSIZE) { + void (*copyBlock)(uint8_t* dest, const uint8_t* src); + void (*copyWord)(uint8_t* dest, const uint8_t* src); + + if (((uintptr_t(dest) ^ uintptr_t(src)) & WORDMASK) == 0) { + const uint8_t* cutoff = (const uint8_t*)(uintptr_t(src) & ~WORDMASK); + MOZ_ASSERT(cutoff >= lim); // Because nbytes >= WORDSIZE + while (src > cutoff) { + AtomicCopyByteUnsynchronized(--dest, --src); + } + copyBlock = AtomicCopyBlockUpUnsynchronized; + copyWord = AtomicCopyWordUnsynchronized; + } + else if (UnalignedAccessesAreOK()) { + copyBlock = AtomicCopyBlockUpUnsynchronized; + copyWord = AtomicCopyWordUnsynchronized; + } else { + copyBlock = AtomicCopyUnalignedBlockUpUnsynchronized; + copyWord = AtomicCopyUnalignedWordUpUnsynchronized; + } + + const uint8_t* blocklim = src - ((src - lim) & ~BLOCKMASK); + while (src > blocklim) { + dest -= BLOCKSIZE; + src -= BLOCKSIZE; + copyBlock(dest, src); + } + + const uint8_t* wordlim = src - ((src - lim) & ~WORDMASK); + while (src > wordlim) { + dest -= WORDSIZE; + src -= WORDSIZE; + copyWord(dest, src); + } + } + + while (src > lim) { + AtomicCopyByteUnsynchronized(--dest, --src); + } +} + +// These will be read and written only by the main thread during startup and +// shutdown. + +static uint8_t* codeSegment; +static uint32_t codeSegmentSize; + +bool InitializeJittedAtomics() { + // We should only initialize once. 
+ MOZ_ASSERT(!codeSegment); + + LifoAlloc lifo(4096); + TempAllocator alloc(&lifo); + JitContext jcx(&alloc); + StackMacroAssembler masm; + + uint32_t fenceSeqCst = GenFenceSeqCst(masm); + +#ifndef JS_64BIT + uint32_t nop = GenNop(masm); +#endif + + Synchronization Full = Synchronization::Full(); + Synchronization None = Synchronization::None(); + + uint32_t load8SeqCst = GenLoad(masm, SIZE8, Full); + uint32_t load16SeqCst = GenLoad(masm, SIZE16, Full); + uint32_t load32SeqCst = GenLoad(masm, SIZE32, Full); +#ifdef JS_64BIT + uint32_t load64SeqCst = GenLoad(masm, SIZE64, Full); +#endif + + uint32_t load8Unsynchronized = GenLoad(masm, SIZE8, None); + uint32_t load16Unsynchronized = GenLoad(masm, SIZE16, None); + uint32_t load32Unsynchronized = GenLoad(masm, SIZE32, None); +#ifdef JS_64BIT + uint32_t load64Unsynchronized = GenLoad(masm, SIZE64, None); +#endif + + uint32_t store8SeqCst = GenStore(masm, SIZE8, Full); + uint32_t store16SeqCst = GenStore(masm, SIZE16, Full); + uint32_t store32SeqCst = GenStore(masm, SIZE32, Full); +#ifdef JS_64BIT + uint32_t store64SeqCst = GenStore(masm, SIZE64, Full); +#endif + + uint32_t store8Unsynchronized = GenStore(masm, SIZE8, None); + uint32_t store16Unsynchronized = GenStore(masm, SIZE16, None); + uint32_t store32Unsynchronized = GenStore(masm, SIZE32, None); +#ifdef JS_64BIT + uint32_t store64Unsynchronized = GenStore(masm, SIZE64, None); +#endif + + uint32_t copyUnalignedBlockDownUnsynchronized = + GenCopy(masm, SIZE8, BLOCKSIZE, CopyDir::DOWN); + uint32_t copyUnalignedBlockUpUnsynchronized = + GenCopy(masm, SIZE8, BLOCKSIZE, CopyDir::UP); + uint32_t copyUnalignedWordDownUnsynchronized = + GenCopy(masm, SIZE8, WORDSIZE, CopyDir::DOWN); + uint32_t copyUnalignedWordUpUnsynchronized = + GenCopy(masm, SIZE8, WORDSIZE, CopyDir::UP); + + uint32_t copyBlockDownUnsynchronized = + GenCopy(masm, SIZEWORD, BLOCKSIZE/WORDSIZE, CopyDir::DOWN); + uint32_t copyBlockUpUnsynchronized = + GenCopy(masm, SIZEWORD, BLOCKSIZE/WORDSIZE, CopyDir::UP); + uint32_t copyWordUnsynchronized = GenCopy(masm, SIZEWORD, 1, CopyDir::DOWN); + uint32_t copyByteUnsynchronized = GenCopy(masm, SIZE8, 1, CopyDir::DOWN); + + uint32_t cmpxchg8SeqCst = GenCmpxchg(masm, SIZE8, Full); + uint32_t cmpxchg16SeqCst = GenCmpxchg(masm, SIZE16, Full); + uint32_t cmpxchg32SeqCst = GenCmpxchg(masm, SIZE32, Full); + uint32_t cmpxchg64SeqCst = GenCmpxchg(masm, SIZE64, Full); + + uint32_t exchange8SeqCst = GenExchange(masm, SIZE8, Full); + uint32_t exchange16SeqCst = GenExchange(masm, SIZE16, Full); + uint32_t exchange32SeqCst = GenExchange(masm, SIZE32, Full); +#ifdef JS_64BIT + uint32_t exchange64SeqCst = GenExchange(masm, SIZE64, Full); +#endif + + uint32_t add8SeqCst = GenFetchOp(masm, SIZE8, AtomicFetchAddOp, Full); + uint32_t add16SeqCst = GenFetchOp(masm, SIZE16, AtomicFetchAddOp, Full); + uint32_t add32SeqCst = GenFetchOp(masm, SIZE32, AtomicFetchAddOp, Full); +#ifdef JS_64BIT + uint32_t add64SeqCst = GenFetchOp(masm, SIZE64, AtomicFetchAddOp, Full); +#endif + + uint32_t and8SeqCst = GenFetchOp(masm, SIZE8, AtomicFetchAndOp, Full); + uint32_t and16SeqCst = GenFetchOp(masm, SIZE16, AtomicFetchAndOp, Full); + uint32_t and32SeqCst = GenFetchOp(masm, SIZE32, AtomicFetchAndOp, Full); +#ifdef JS_64BIT + uint32_t and64SeqCst = GenFetchOp(masm, SIZE64, AtomicFetchAndOp, Full); +#endif + + uint32_t or8SeqCst = GenFetchOp(masm, SIZE8, AtomicFetchOrOp, Full); + uint32_t or16SeqCst = GenFetchOp(masm, SIZE16, AtomicFetchOrOp, Full); + uint32_t or32SeqCst = GenFetchOp(masm, SIZE32, AtomicFetchOrOp, Full); +#ifdef 
JS_64BIT + uint32_t or64SeqCst = GenFetchOp(masm, SIZE64, AtomicFetchOrOp, Full); +#endif + + uint32_t xor8SeqCst = GenFetchOp(masm, SIZE8, AtomicFetchXorOp, Full); + uint32_t xor16SeqCst = GenFetchOp(masm, SIZE16, AtomicFetchXorOp, Full); + uint32_t xor32SeqCst = GenFetchOp(masm, SIZE32, AtomicFetchXorOp, Full); +#ifdef JS_64BIT + uint32_t xor64SeqCst = GenFetchOp(masm, SIZE64, AtomicFetchXorOp, Full); +#endif + + masm.finish(); + if (masm.oom()) { + return false; + } + + // Allocate executable memory. + uint32_t codeLength = masm.bytesNeeded(); + size_t roundedCodeLength = JS_ROUNDUP(codeLength, ExecutableCodePageSize); + uint8_t* code = + (uint8_t*)AllocateExecutableMemory(roundedCodeLength, + ProtectionSetting::Writable, + MemCheckKind::MakeUndefined); + if (!code) { + return false; + } + + // Zero the padding. + memset(code + codeLength, 0, roundedCodeLength - codeLength); + + // Copy the code into place but do not flush, as the flush path requires a + // JSContext* we do not have. + masm.executableCopy(code, /* flushICache = */ false); + + // Flush the icache using a primitive method. + ExecutableAllocator::cacheFlush(code, roundedCodeLength); + + // Reprotect the whole region to avoid having separate RW and RX mappings. + if (!ExecutableAllocator::makeExecutable(code, roundedCodeLength)) { + DeallocateExecutableMemory(code, roundedCodeLength); + return false; + } + + // Create the function pointers. + + AtomicFenceSeqCst = (void(*)())(code + fenceSeqCst); + +#ifndef JS_64BIT + AtomicCompilerFence = (void(*)())(code + nop); +#endif + + AtomicLoad8SeqCst = (uint8_t(*)(const uint8_t* addr))(code + load8SeqCst); + AtomicLoad16SeqCst = (uint16_t(*)(const uint16_t* addr))(code + load16SeqCst); + AtomicLoad32SeqCst = (uint32_t(*)(const uint32_t* addr))(code + load32SeqCst); +#ifdef JS_64BIT + AtomicLoad64SeqCst = (uint64_t(*)(const uint64_t* addr))(code + load64SeqCst); +#endif + + AtomicLoad8Unsynchronized = + (uint8_t(*)(const uint8_t* addr))(code + load8Unsynchronized); + AtomicLoad16Unsynchronized = + (uint16_t(*)(const uint16_t* addr))(code + load16Unsynchronized); + AtomicLoad32Unsynchronized = + (uint32_t(*)(const uint32_t* addr))(code + load32Unsynchronized); +#ifdef JS_64BIT + AtomicLoad64Unsynchronized = + (uint64_t(*)(const uint64_t* addr))(code + load64Unsynchronized); +#endif + + AtomicStore8SeqCst = + (uint8_t(*)(uint8_t* addr, uint8_t val))(code + store8SeqCst); + AtomicStore16SeqCst = + (uint16_t(*)(uint16_t* addr, uint16_t val))(code + store16SeqCst); + AtomicStore32SeqCst = + (uint32_t(*)(uint32_t* addr, uint32_t val))(code + store32SeqCst); +#ifdef JS_64BIT + AtomicStore64SeqCst = + (uint64_t(*)(uint64_t* addr, uint64_t val))(code + store64SeqCst); +#endif + + AtomicStore8Unsynchronized = + (uint8_t(*)(uint8_t* addr, uint8_t val))(code + store8Unsynchronized); + AtomicStore16Unsynchronized = + (uint16_t(*)(uint16_t* addr, uint16_t val))(code + store16Unsynchronized); + AtomicStore32Unsynchronized = + (uint32_t(*)(uint32_t* addr, uint32_t val))(code + store32Unsynchronized); +#ifdef JS_64BIT + AtomicStore64Unsynchronized = + (uint64_t(*)(uint64_t* addr, uint64_t val))(code + store64Unsynchronized); +#endif + + AtomicCopyUnalignedBlockDownUnsynchronized = + (void(*)(uint8_t* dest, const uint8_t* src))( + code + copyUnalignedBlockDownUnsynchronized); + AtomicCopyUnalignedBlockUpUnsynchronized = + (void(*)(uint8_t* dest, const uint8_t* src))( + code + copyUnalignedBlockUpUnsynchronized); + AtomicCopyUnalignedWordDownUnsynchronized = + (void(*)(uint8_t* dest, const uint8_t* 
src))( + code + copyUnalignedWordDownUnsynchronized); + AtomicCopyUnalignedWordUpUnsynchronized = + (void(*)(uint8_t* dest, const uint8_t* src))( + code + copyUnalignedWordUpUnsynchronized); + + AtomicCopyBlockDownUnsynchronized = + (void(*)(uint8_t* dest, const uint8_t* src))( + code + copyBlockDownUnsynchronized); + AtomicCopyBlockUpUnsynchronized = + (void(*)(uint8_t* dest, const uint8_t* src))( + code + copyBlockUpUnsynchronized); + AtomicCopyWordUnsynchronized = + (void(*)(uint8_t* dest, const uint8_t* src))(code + copyWordUnsynchronized); + AtomicCopyByteUnsynchronized = + (void(*)(uint8_t* dest, const uint8_t* src))(code + copyByteUnsynchronized); + + AtomicCmpXchg8SeqCst = + (uint8_t(*)(uint8_t* addr, uint8_t oldval, uint8_t newval))( + code + cmpxchg8SeqCst); + AtomicCmpXchg16SeqCst = + (uint16_t(*)(uint16_t* addr, uint16_t oldval, uint16_t newval))( + code + cmpxchg16SeqCst); + AtomicCmpXchg32SeqCst = + (uint32_t(*)(uint32_t* addr, uint32_t oldval, uint32_t newval))( + code + cmpxchg32SeqCst); + AtomicCmpXchg64SeqCst = + (uint64_t(*)(uint64_t* addr, uint64_t oldval, uint64_t newval))( + code + cmpxchg64SeqCst); + + AtomicExchange8SeqCst = (uint8_t(*)(uint8_t* addr, uint8_t val))( + code + exchange8SeqCst); + AtomicExchange16SeqCst = (uint16_t(*)(uint16_t* addr, uint16_t val))( + code + exchange16SeqCst); + AtomicExchange32SeqCst = (uint32_t(*)(uint32_t* addr, uint32_t val))( + code + exchange32SeqCst); +#ifdef JS_64BIT + AtomicExchange64SeqCst = (uint64_t(*)(uint64_t* addr, uint64_t val))( + code + exchange64SeqCst); +#endif + + AtomicAdd8SeqCst = + (uint8_t(*)(uint8_t* addr, uint8_t val))(code + add8SeqCst); + AtomicAdd16SeqCst = + (uint16_t(*)(uint16_t* addr, uint16_t val))(code + add16SeqCst); + AtomicAdd32SeqCst = + (uint32_t(*)(uint32_t* addr, uint32_t val))(code + add32SeqCst); +#ifdef JS_64BIT + AtomicAdd64SeqCst = + (uint64_t(*)(uint64_t* addr, uint64_t val))(code + add64SeqCst); +#endif + + AtomicAnd8SeqCst = + (uint8_t(*)(uint8_t* addr, uint8_t val))(code + and8SeqCst); + AtomicAnd16SeqCst = + (uint16_t(*)(uint16_t* addr, uint16_t val))(code + and16SeqCst); + AtomicAnd32SeqCst = + (uint32_t(*)(uint32_t* addr, uint32_t val))(code + and32SeqCst); +#ifdef JS_64BIT + AtomicAnd64SeqCst = + (uint64_t(*)(uint64_t* addr, uint64_t val))(code + and64SeqCst); +#endif + + AtomicOr8SeqCst = + (uint8_t(*)(uint8_t* addr, uint8_t val))(code + or8SeqCst); + AtomicOr16SeqCst = + (uint16_t(*)(uint16_t* addr, uint16_t val))(code + or16SeqCst); + AtomicOr32SeqCst = + (uint32_t(*)(uint32_t* addr, uint32_t val))(code + or32SeqCst); +#ifdef JS_64BIT + AtomicOr64SeqCst = + (uint64_t(*)(uint64_t* addr, uint64_t val))(code + or64SeqCst); +#endif + + AtomicXor8SeqCst = + (uint8_t(*)(uint8_t* addr, uint8_t val))(code + xor8SeqCst); + AtomicXor16SeqCst = + (uint16_t(*)(uint16_t* addr, uint16_t val))(code + xor16SeqCst); + AtomicXor32SeqCst = + (uint32_t(*)(uint32_t* addr, uint32_t val))(code + xor32SeqCst); +#ifdef JS_64BIT + AtomicXor64SeqCst = + (uint64_t(*)(uint64_t* addr, uint64_t val))(code + xor64SeqCst); +#endif + + codeSegment = code; + codeSegmentSize = roundedCodeLength; + + return true; +} + +void ShutDownJittedAtomics() { + // Must have been initialized. 
+ MOZ_ASSERT(codeSegment); + + DeallocateExecutableMemory(codeSegment, codeSegmentSize); + codeSegment = nullptr; + codeSegmentSize = 0; +} + +} // jit +} // js diff --git a/js/src/jit/shared/AtomicOperations-shared-jit.h b/js/src/jit/shared/AtomicOperations-shared-jit.h new file mode 100644 index 0000000000000..5f9c54557e585 --- /dev/null +++ b/js/src/jit/shared/AtomicOperations-shared-jit.h @@ -0,0 +1,605 @@ +/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- + * vim: set ts=8 sts=4 et sw=4 tw=99: + * This Source Code Form is subject to the terms of the Mozilla Public + * License, v. 2.0. If a copy of the MPL was not distributed with this + * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ + +/* For overall documentation, see jit/AtomicOperations.h. + * + * NOTE CAREFULLY: This file is only applicable when we have configured a JIT + * and the JIT is for the same architecture that we're compiling the shell for. + * Simulators must use a different mechanism. + * + * See comments before the include nest near the end of jit/AtomicOperations.h + * if you didn't understand that. + */ + +#ifndef jit_shared_AtomicOperations_shared_jit_h +#define jit_shared_AtomicOperations_shared_jit_h + +#include "mozilla/Assertions.h" +#include "mozilla/Types.h" + +#include "jsapi.h" + +#include "vm/ArrayBufferObject.h" + +namespace js { +namespace jit { + +// The function pointers in this section all point to jitted code. +// +// On 32-bit systems we assume for simplicity's sake that we don't have any +// 64-bit atomic operations except cmpxchg (this is a concession to x86 but it's +// not a hardship). On 32-bit systems we therefore implement other 64-bit +// atomic operations in terms of cmpxchg along with some C++ code and a local +// reordering fence to prevent other loads and stores from being intermingled +// with operations in the implementation of the atomic. + +// `fence` performs a full memory barrier. +extern void (*AtomicFenceSeqCst)(); + +#ifndef JS_64BIT +// `compiler_fence` erects a reordering boundary for operations on the current +// thread. We use it to prevent the compiler from reordering loads and stores +// inside larger primitives that are synthesized from cmpxchg. +extern void (*AtomicCompilerFence)(); +#endif + +extern uint8_t (*AtomicLoad8SeqCst)(const uint8_t* addr); +extern uint16_t (*AtomicLoad16SeqCst)(const uint16_t* addr); +extern uint32_t (*AtomicLoad32SeqCst)(const uint32_t* addr); +#ifdef JS_64BIT +extern uint64_t (*AtomicLoad64SeqCst)(const uint64_t* addr); +#endif + +// These are access-atomic up to sizeof(uintptr_t). +extern uint8_t (*AtomicLoad8Unsynchronized)(const uint8_t* addr); +extern uint16_t (*AtomicLoad16Unsynchronized)(const uint16_t* addr); +extern uint32_t (*AtomicLoad32Unsynchronized)(const uint32_t* addr); +#ifdef JS_64BIT +extern uint64_t (*AtomicLoad64Unsynchronized)(const uint64_t* addr); +#endif + +extern uint8_t (*AtomicStore8SeqCst)(uint8_t* addr, uint8_t val); +extern uint16_t (*AtomicStore16SeqCst)(uint16_t* addr, uint16_t val); +extern uint32_t (*AtomicStore32SeqCst)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +extern uint64_t (*AtomicStore64SeqCst)(uint64_t* addr, uint64_t val); +#endif + +// These are access-atomic up to sizeof(uintptr_t). 
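Editor's note: here "access-atomic" means single-copy atomic, that is, a racing reader of a naturally aligned cell up to pointer size observes either the old or the new value, never a torn mix of the two, although no ordering is implied. A minimal sketch using the unsynchronized entry points declared in this header (illustration only; the helper names are made up):

// No ordering guarantees, but no tearing either, for naturally aligned cells
// up to sizeof(uintptr_t).
static void racyPublish(uint32_t* cell, uint32_t v) {
  AtomicStore32Unsynchronized(cell, v);
}
static uint32_t racySample(const uint32_t* cell) {
  return AtomicLoad32Unsynchronized(cell);
}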
+extern uint8_t (*AtomicStore8Unsynchronized)(uint8_t* addr, uint8_t val); +extern uint16_t (*AtomicStore16Unsynchronized)(uint16_t* addr, uint16_t val); +extern uint32_t (*AtomicStore32Unsynchronized)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +extern uint64_t (*AtomicStore64Unsynchronized)(uint64_t* addr, uint64_t val); +#endif + +// `exchange` takes a cell address and a value. It stores it in the cell and +// returns the value previously in the cell. +extern uint8_t (*AtomicExchange8SeqCst)(uint8_t* addr, uint8_t val); +extern uint16_t (*AtomicExchange16SeqCst)(uint16_t* addr, uint16_t val); +extern uint32_t (*AtomicExchange32SeqCst)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +extern uint64_t (*AtomicExchange64SeqCst)(uint64_t* addr, uint64_t val); +#endif + +// `add` adds a value atomically to the cell and returns the old value in the +// cell. (There is no `sub`; just add the negated value.) +extern uint8_t (*AtomicAdd8SeqCst)(uint8_t* addr, uint8_t val); +extern uint16_t (*AtomicAdd16SeqCst)(uint16_t* addr, uint16_t val); +extern uint32_t (*AtomicAdd32SeqCst)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +extern uint64_t (*AtomicAdd64SeqCst)(uint64_t* addr, uint64_t val); +#endif + +// `and` bitwise-ands a value atomically into the cell and returns the old value +// in the cell. +extern uint8_t (*AtomicAnd8SeqCst)(uint8_t* addr, uint8_t val); +extern uint16_t (*AtomicAnd16SeqCst)(uint16_t* addr, uint16_t val); +extern uint32_t (*AtomicAnd32SeqCst)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +extern uint64_t (*AtomicAnd64SeqCst)(uint64_t* addr, uint64_t val); +#endif + +// `or` bitwise-ors a value atomically into the cell and returns the old value +// in the cell. +extern uint8_t (*AtomicOr8SeqCst)(uint8_t* addr, uint8_t val); +extern uint16_t (*AtomicOr16SeqCst)(uint16_t* addr, uint16_t val); +extern uint32_t (*AtomicOr32SeqCst)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +extern uint64_t (*AtomicOr64SeqCst)(uint64_t* addr, uint64_t val); +#endif + +// `xor` bitwise-xors a value atomically into the cell and returns the old value +// in the cell. +extern uint8_t (*AtomicXor8SeqCst)(uint8_t* addr, uint8_t val); +extern uint16_t (*AtomicXor16SeqCst)(uint16_t* addr, uint16_t val); +extern uint32_t (*AtomicXor32SeqCst)(uint32_t* addr, uint32_t val); +#ifdef JS_64BIT +extern uint64_t (*AtomicXor64SeqCst)(uint64_t* addr, uint64_t val); +#endif + +// `cmpxchg` takes a cell address, an expected value and a replacement value. +// If the value in the cell equals the expected value then the replacement value +// is stored in the cell. It always returns the value previously in the cell. +extern uint8_t (*AtomicCmpXchg8SeqCst)(uint8_t* addr, uint8_t oldval, uint8_t newval); +extern uint16_t (*AtomicCmpXchg16SeqCst)(uint16_t* addr, uint16_t oldval, uint16_t newval); +extern uint32_t (*AtomicCmpXchg32SeqCst)(uint32_t* addr, uint32_t oldval, uint32_t newval); +extern uint64_t (*AtomicCmpXchg64SeqCst)(uint64_t* addr, uint64_t oldval, uint64_t newval); + +// `...MemcpyDown` moves bytes toward lower addresses in memory: dest <= src. +// `...MemcpyUp` moves bytes toward higher addresses in memory: dest >= src. 
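Editor's note: the memmove wrapper further down in this header selects between these two primitives at run time. A minimal sketch of that selection (it mirrors memmoveSafeWhenRacy below rather than adding anything new):

// When the destination is at or below the source, the "down" variant is safe
// for overlapping ranges; otherwise the "up" variant is used.
static void racyMemmoveSketch(uint8_t* dest, const uint8_t* src,
                              size_t nbytes) {
  if (dest <= src) {
    AtomicMemcpyDownUnsynchronized(dest, src, nbytes);
  } else {
    AtomicMemcpyUpUnsynchronized(dest, src, nbytes);
  }
}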
+extern void AtomicMemcpyDownUnsynchronized(uint8_t* dest, const uint8_t* src, size_t nbytes); +extern void AtomicMemcpyUpUnsynchronized(uint8_t* dest, const uint8_t* src, size_t nbytes); + +} } + +inline bool js::jit::AtomicOperations::hasAtomic8() { + return true; +} + +inline bool js::jit::AtomicOperations::isLockfree8() { + return true; +} + +inline void +js::jit::AtomicOperations::fenceSeqCst() { + AtomicFenceSeqCst(); +} + +#define JIT_LOADOP(T, U, loadop) \ + template<> inline T \ + AtomicOperations::loadSeqCst(T* addr) { \ + JS::AutoSuppressGCAnalysis nogc; \ + return (T)loadop((U*)addr); \ + } + +#ifndef JS_64BIT +# define JIT_LOADOP_CAS(T) \ + template<> \ + inline T \ + AtomicOperations::loadSeqCst(T* addr) { \ + JS::AutoSuppressGCAnalysis nogc; \ + AtomicCompilerFence(); \ + return (T)AtomicCmpXchg64SeqCst((uint64_t*)addr, 0, 0); \ + } +#endif // !JS_64BIT + +namespace js { +namespace jit { + +JIT_LOADOP(int8_t, uint8_t, AtomicLoad8SeqCst) +JIT_LOADOP(uint8_t, uint8_t, AtomicLoad8SeqCst) +JIT_LOADOP(int16_t, uint16_t, AtomicLoad16SeqCst) +JIT_LOADOP(uint16_t, uint16_t, AtomicLoad16SeqCst) +JIT_LOADOP(int32_t, uint32_t, AtomicLoad32SeqCst) +JIT_LOADOP(uint32_t, uint32_t, AtomicLoad32SeqCst) + +#ifdef JIT_LOADOP_CAS +JIT_LOADOP_CAS(int64_t) +JIT_LOADOP_CAS(uint64_t) +#else +JIT_LOADOP(int64_t, uint64_t, AtomicLoad64SeqCst) +JIT_LOADOP(uint64_t, uint64_t, AtomicLoad64SeqCst) +#endif + +}} + +#undef JIT_LOADOP +#undef JIT_LOADOP_CAS + +#define JIT_STOREOP(T, U, storeop) \ + template<> inline void \ + AtomicOperations::storeSeqCst(T* addr, T val) { \ + JS::AutoSuppressGCAnalysis nogc; \ + storeop((U*)addr, val); \ + } + +#ifndef JS_64BIT +# define JIT_STOREOP_CAS(T) \ + template<> \ + inline void \ + AtomicOperations::storeSeqCst(T* addr, T val) { \ + JS::AutoSuppressGCAnalysis nogc; \ + AtomicCompilerFence(); \ + T oldval = *addr; /* good initial approximation */ \ + for (;;) { \ + T nextval = (T)AtomicCmpXchg64SeqCst((uint64_t*)addr, \ + (uint64_t)oldval, \ + (uint64_t)val); \ + if (nextval == oldval) { \ + break; \ + } \ + oldval = nextval; \ + } \ + AtomicCompilerFence(); \ + } +#endif // !JS_64BIT + +namespace js { +namespace jit { + +JIT_STOREOP(int8_t, uint8_t, AtomicStore8SeqCst) +JIT_STOREOP(uint8_t, uint8_t, AtomicStore8SeqCst) +JIT_STOREOP(int16_t, uint16_t, AtomicStore16SeqCst) +JIT_STOREOP(uint16_t, uint16_t, AtomicStore16SeqCst) +JIT_STOREOP(int32_t, uint32_t, AtomicStore32SeqCst) +JIT_STOREOP(uint32_t, uint32_t, AtomicStore32SeqCst) + +#ifdef JIT_STOREOP_CAS +JIT_STOREOP_CAS(int64_t) +JIT_STOREOP_CAS(uint64_t) +#else +JIT_STOREOP(int64_t, uint64_t, AtomicStore64SeqCst) +JIT_STOREOP(uint64_t, uint64_t, AtomicStore64SeqCst) +#endif + +}} + +#undef JIT_STOREOP +#undef JIT_STOREOP_CAS + +#define JIT_EXCHANGEOP(T, U, xchgop) \ + template<> inline T \ + AtomicOperations::exchangeSeqCst(T* addr, T val) { \ + JS::AutoSuppressGCAnalysis nogc; \ + return (T)xchgop((U*)addr, (U)val); \ + } + +#ifndef JS_64BIT +# define JIT_EXCHANGEOP_CAS(T) \ + template<> inline T \ + AtomicOperations::exchangeSeqCst(T* addr, T val) { \ + JS::AutoSuppressGCAnalysis nogc; \ + AtomicCompilerFence(); \ + T oldval = *addr; \ + for (;;) { \ + T nextval = (T)AtomicCmpXchg64SeqCst((uint64_t*)addr, \ + (uint64_t)oldval, \ + (uint64_t)val); \ + if (nextval == oldval) { \ + break; \ + } \ + oldval = nextval; \ + } \ + AtomicCompilerFence(); \ + return oldval; \ + } +#endif // !JS_64BIT + +namespace js { +namespace jit { + +JIT_EXCHANGEOP(int8_t, uint8_t, AtomicExchange8SeqCst) +JIT_EXCHANGEOP(uint8_t, 
uint8_t, AtomicExchange8SeqCst) +JIT_EXCHANGEOP(int16_t, uint16_t, AtomicExchange16SeqCst) +JIT_EXCHANGEOP(uint16_t, uint16_t, AtomicExchange16SeqCst) +JIT_EXCHANGEOP(int32_t, uint32_t, AtomicExchange32SeqCst) +JIT_EXCHANGEOP(uint32_t, uint32_t, AtomicExchange32SeqCst) + +#ifdef JIT_EXCHANGEOP_CAS +JIT_EXCHANGEOP_CAS(int64_t) +JIT_EXCHANGEOP_CAS(uint64_t) +#else +JIT_EXCHANGEOP(int64_t, uint64_t, AtomicExchange64SeqCst) +JIT_EXCHANGEOP(uint64_t, uint64_t, AtomicExchange64SeqCst) +#endif + +}} + +#undef JIT_EXCHANGEOP +#undef JIT_EXCHANGEOP_CAS + +#define JIT_CAS(T, U, cmpxchg) \ + template<> inline T \ + AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, T newval) { \ + JS::AutoSuppressGCAnalysis nogc; \ + return (T)cmpxchg((U*)addr, (U)oldval, (U)newval); \ + } + +namespace js { +namespace jit { + +JIT_CAS(int8_t, uint8_t, AtomicCmpXchg8SeqCst) +JIT_CAS(uint8_t, uint8_t, AtomicCmpXchg8SeqCst) +JIT_CAS(int16_t, uint16_t, AtomicCmpXchg16SeqCst) +JIT_CAS(uint16_t, uint16_t, AtomicCmpXchg16SeqCst) +JIT_CAS(int32_t, uint32_t, AtomicCmpXchg32SeqCst) +JIT_CAS(uint32_t, uint32_t, AtomicCmpXchg32SeqCst) +JIT_CAS(int64_t, uint64_t, AtomicCmpXchg64SeqCst) +JIT_CAS(uint64_t, uint64_t, AtomicCmpXchg64SeqCst) + +}} + +#undef JIT_CAS + +#define JIT_FETCHADDOP(T, U, xadd) \ + template<> inline T \ + AtomicOperations::fetchAddSeqCst(T* addr, T val) { \ + JS::AutoSuppressGCAnalysis nogc; \ + return (T)xadd((U*)addr, (U)val); \ + } \ + +#define JIT_FETCHSUBOP(T) \ + template<> inline T \ + AtomicOperations::fetchSubSeqCst(T* addr, T val) { \ + JS::AutoSuppressGCAnalysis nogc; \ + return fetchAddSeqCst(addr, (T)(0-val)); \ + } + +#ifndef JS_64BIT +# define JIT_FETCHADDOP_CAS(T) \ + template<> inline T \ + AtomicOperations::fetchAddSeqCst(T* addr, T val) { \ + JS::AutoSuppressGCAnalysis nogc; \ + AtomicCompilerFence(); \ + T oldval = *addr; /* Good initial approximation */ \ + for (;;) { \ + T nextval = (T)AtomicCmpXchg64SeqCst((uint64_t*)addr, \ + (uint64_t)oldval, \ + (uint64_t)(oldval + val)); \ + if (nextval == oldval) { \ + break; \ + } \ + oldval = nextval; \ + } \ + AtomicCompilerFence(); \ + return oldval; \ + } +#endif // !JS_64BIT + +namespace js { +namespace jit { + +JIT_FETCHADDOP(int8_t, uint8_t, AtomicAdd8SeqCst) +JIT_FETCHADDOP(uint8_t, uint8_t, AtomicAdd8SeqCst) +JIT_FETCHADDOP(int16_t, uint16_t, AtomicAdd16SeqCst) +JIT_FETCHADDOP(uint16_t, uint16_t, AtomicAdd16SeqCst) +JIT_FETCHADDOP(int32_t, uint32_t, AtomicAdd32SeqCst) +JIT_FETCHADDOP(uint32_t, uint32_t, AtomicAdd32SeqCst) + +#ifdef JIT_FETCHADDOP_CAS +JIT_FETCHADDOP_CAS(int64_t) +JIT_FETCHADDOP_CAS(uint64_t) +#else +JIT_FETCHADDOP(int64_t, uint64_t, AtomicAdd64SeqCst) +JIT_FETCHADDOP(uint64_t, uint64_t, AtomicAdd64SeqCst) +#endif + +JIT_FETCHSUBOP(int8_t) +JIT_FETCHSUBOP(uint8_t) +JIT_FETCHSUBOP(int16_t) +JIT_FETCHSUBOP(uint16_t) +JIT_FETCHSUBOP(int32_t) +JIT_FETCHSUBOP(uint32_t) +JIT_FETCHSUBOP(int64_t) +JIT_FETCHSUBOP(uint64_t) + +}} + +#undef JIT_FETCHADDOP +#undef JIT_FETCHADDOP_CAS +#undef JIT_FETCHSUBOP + +#define JIT_FETCHBITOPX(T, U, name, op) \ + template<> inline T \ + AtomicOperations::name(T* addr, T val) { \ + JS::AutoSuppressGCAnalysis nogc; \ + return (T)op((U *)addr, (U)val); \ + } + +#define JIT_FETCHBITOP(T, U, andop, orop, xorop) \ + JIT_FETCHBITOPX(T, U, fetchAndSeqCst, andop) \ + JIT_FETCHBITOPX(T, U, fetchOrSeqCst, orop) \ + JIT_FETCHBITOPX(T, U, fetchXorSeqCst, xorop) + +#ifndef JS_64BIT + +# define AND_OP & +# define OR_OP | +# define XOR_OP ^ + +# define JIT_FETCHBITOPX_CAS(T, name, OP) \ + template<> 
inline T \ + AtomicOperations::name(T* addr, T val) { \ + JS::AutoSuppressGCAnalysis nogc; \ + AtomicCompilerFence(); \ + T oldval = *addr; \ + for (;;) { \ + T nextval = (T)AtomicCmpXchg64SeqCst((uint64_t*)addr, \ + (uint64_t)oldval, \ + (uint64_t)(oldval OP val)); \ + if (nextval == oldval) { \ + break; \ + } \ + oldval = nextval; \ + } \ + AtomicCompilerFence(); \ + return oldval; \ + } + +# define JIT_FETCHBITOP_CAS(T) \ + JIT_FETCHBITOPX_CAS(T, fetchAndSeqCst, AND_OP) \ + JIT_FETCHBITOPX_CAS(T, fetchOrSeqCst, OR_OP) \ + JIT_FETCHBITOPX_CAS(T, fetchXorSeqCst, XOR_OP) + +#endif // !JS_64BIT + +namespace js { +namespace jit { + +JIT_FETCHBITOP(int8_t, uint8_t, AtomicAnd8SeqCst, AtomicOr8SeqCst, AtomicXor8SeqCst) +JIT_FETCHBITOP(uint8_t, uint8_t, AtomicAnd8SeqCst, AtomicOr8SeqCst, AtomicXor8SeqCst) +JIT_FETCHBITOP(int16_t, uint16_t, AtomicAnd16SeqCst, AtomicOr16SeqCst, AtomicXor16SeqCst) +JIT_FETCHBITOP(uint16_t, uint16_t, AtomicAnd16SeqCst, AtomicOr16SeqCst, AtomicXor16SeqCst) +JIT_FETCHBITOP(int32_t, uint32_t, AtomicAnd32SeqCst, AtomicOr32SeqCst, AtomicXor32SeqCst) +JIT_FETCHBITOP(uint32_t, uint32_t, AtomicAnd32SeqCst, AtomicOr32SeqCst, AtomicXor32SeqCst) + +#ifdef JIT_FETCHBITOP_CAS +JIT_FETCHBITOP_CAS(int64_t) +JIT_FETCHBITOP_CAS(uint64_t) +#else +JIT_FETCHBITOP(int64_t, uint64_t, AtomicAnd64SeqCst, AtomicOr64SeqCst, AtomicXor64SeqCst) +JIT_FETCHBITOP(uint64_t, uint64_t, AtomicAnd64SeqCst, AtomicOr64SeqCst, AtomicXor64SeqCst) +#endif + +}} + +#undef JIT_FETCHBITOPX_CAS +#undef JIT_FETCHBITOPX +#undef JIT_FETCHBITOP_CAS +#undef JIT_FETCHBITOP + +#define JIT_LOADSAFE(T, U, loadop) \ + template<> \ + inline T \ + js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { \ + JS::AutoSuppressGCAnalysis nogc; \ + union { U u; T t; }; \ + u = loadop((U*)addr); \ + return t; \ + } + +#ifndef JS_64BIT +# define JIT_LOADSAFE_TEARING(T) \ + template<> \ + inline T \ + js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { \ + JS::AutoSuppressGCAnalysis nogc; \ + MOZ_ASSERT(sizeof(T) == 8); \ + union { uint32_t u[2]; T t; }; \ + uint32_t* ptr = (uint32_t*)addr; \ + u[0] = AtomicLoad32Unsynchronized(ptr); \ + u[1] = AtomicLoad32Unsynchronized(ptr + 1); \ + return t; \ + } +#endif // !JS_64BIT + +namespace js { +namespace jit { + +JIT_LOADSAFE(int8_t, uint8_t, AtomicLoad8Unsynchronized) +JIT_LOADSAFE(uint8_t, uint8_t, AtomicLoad8Unsynchronized) +JIT_LOADSAFE(int16_t, uint16_t, AtomicLoad16Unsynchronized) +JIT_LOADSAFE(uint16_t, uint16_t, AtomicLoad16Unsynchronized) +JIT_LOADSAFE(int32_t, uint32_t, AtomicLoad32Unsynchronized) +JIT_LOADSAFE(uint32_t, uint32_t, AtomicLoad32Unsynchronized) +#ifdef JIT_LOADSAFE_TEARING +JIT_LOADSAFE_TEARING(int64_t) +JIT_LOADSAFE_TEARING(uint64_t) +JIT_LOADSAFE_TEARING(double) +#else +JIT_LOADSAFE(int64_t, uint64_t, AtomicLoad64Unsynchronized) +JIT_LOADSAFE(uint64_t, uint64_t, AtomicLoad64Unsynchronized) +JIT_LOADSAFE(double, uint64_t, AtomicLoad64Unsynchronized) +#endif +JIT_LOADSAFE(float, uint32_t, AtomicLoad32Unsynchronized) + +// Clang requires a specialization for uint8_clamped. 
+template<> +inline uint8_clamped js::jit::AtomicOperations::loadSafeWhenRacy( + uint8_clamped* addr) { + return uint8_clamped(loadSafeWhenRacy((uint8_t*)addr)); +} + +}} + +#undef JIT_LOADSAFE +#undef JIT_LOADSAFE_TEARING + +#define JIT_STORESAFE(T, U, storeop) \ + template<> \ + inline void \ + js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { \ + JS::AutoSuppressGCAnalysis nogc; \ + union { U u; T t; }; \ + t = val; \ + storeop((U*)addr, u); \ + } + +#ifndef JS_64BIT +# define JIT_STORESAFE_TEARING(T) \ + template<> \ + inline void \ + js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { \ + JS::AutoSuppressGCAnalysis nogc; \ + union { uint32_t u[2]; T t; }; \ + t = val; \ + uint32_t* ptr = (uint32_t*)addr; \ + AtomicStore32Unsynchronized(ptr, u[0]); \ + AtomicStore32Unsynchronized(ptr + 1, u[1]); \ + } +#endif // !JS_64BIT + +namespace js { +namespace jit { + +JIT_STORESAFE(int8_t, uint8_t, AtomicStore8Unsynchronized) +JIT_STORESAFE(uint8_t, uint8_t, AtomicStore8Unsynchronized) +JIT_STORESAFE(int16_t, uint16_t, AtomicStore16Unsynchronized) +JIT_STORESAFE(uint16_t, uint16_t, AtomicStore16Unsynchronized) +JIT_STORESAFE(int32_t, uint32_t, AtomicStore32Unsynchronized) +JIT_STORESAFE(uint32_t, uint32_t, AtomicStore32Unsynchronized) +#ifdef JIT_STORESAFE_TEARING +JIT_STORESAFE_TEARING(int64_t) +JIT_STORESAFE_TEARING(uint64_t) +JIT_STORESAFE_TEARING(double) +#else +JIT_STORESAFE(int64_t, uint64_t, AtomicStore64Unsynchronized) +JIT_STORESAFE(uint64_t, uint64_t, AtomicStore64Unsynchronized) +JIT_STORESAFE(double, uint64_t, AtomicStore64Unsynchronized) +#endif +JIT_STORESAFE(float, uint32_t, AtomicStore32Unsynchronized) + +// Clang requires a specialization for uint8_clamped. +template<> +inline void js::jit::AtomicOperations::storeSafeWhenRacy(uint8_clamped* addr, + uint8_clamped val) { + storeSafeWhenRacy((uint8_t*)addr, (uint8_t)val); +} + +}} + +#undef JIT_STORESAFE +#undef JIT_STORESAFE_TEARING + +void js::jit::AtomicOperations::memcpySafeWhenRacy(void* dest, const void* src, + size_t nbytes) { + JS::AutoSuppressGCAnalysis nogc; + MOZ_ASSERT(!((char*)dest <= (char*)src && (char*)src < (char*)dest+nbytes)); + MOZ_ASSERT(!((char*)src <= (char*)dest && (char*)dest < (char*)src+nbytes)); + AtomicMemcpyDownUnsynchronized((uint8_t*)dest, (const uint8_t*)src, nbytes); +} + +inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, + const void* src, + size_t nbytes) { + JS::AutoSuppressGCAnalysis nogc; + if ((char*)dest <= (char*)src) { + AtomicMemcpyDownUnsynchronized((uint8_t*)dest, (const uint8_t*)src, + nbytes); + } else { + AtomicMemcpyUpUnsynchronized((uint8_t*)dest, (const uint8_t*)src, + nbytes); + } +} + +namespace js { +namespace jit { + +extern bool InitializeJittedAtomics(); +extern void ShutDownJittedAtomics(); + +}} + +inline bool js::jit::AtomicOperations::Initialize() { + return InitializeJittedAtomics(); +} + +inline void js::jit::AtomicOperations::ShutDown() { + ShutDownJittedAtomics(); +} + +#endif // jit_shared_AtomicOperations_shared_jit_h diff --git a/js/src/jit/x64/MacroAssembler-x64.cpp b/js/src/jit/x64/MacroAssembler-x64.cpp index e765e08631ba6..73abbc2a36a20 100644 --- a/js/src/jit/x64/MacroAssembler-x64.cpp +++ b/js/src/jit/x64/MacroAssembler-x64.cpp @@ -930,27 +930,33 @@ void MacroAssembler::wasmAtomicExchange64(const wasm::MemoryAccessDesc& access, } template -static void WasmAtomicFetchOp64(MacroAssembler& masm, - const wasm::MemoryAccessDesc access, - AtomicOp op, Register value, const T& mem, - Register temp, Register output) { 
+static void AtomicFetchOp64(MacroAssembler& masm, + const wasm::MemoryAccessDesc* access, AtomicOp op, + Register value, const T& mem, Register temp, + Register output) { if (op == AtomicFetchAddOp) { if (value != output) { masm.movq(value, output); } - masm.append(access, masm.size()); + if (access) { + masm.append(*access, masm.size()); + } masm.lock_xaddq(output, Operand(mem)); } else if (op == AtomicFetchSubOp) { if (value != output) { masm.movq(value, output); } masm.negq(output); - masm.append(access, masm.size()); + if (access) { + masm.append(*access, masm.size()); + } masm.lock_xaddq(output, Operand(mem)); } else { Label again; MOZ_ASSERT(output == rax); - masm.append(access, masm.size()); + if (access) { + masm.append(*access, masm.size()); + } masm.movq(Operand(mem), rax); masm.bind(&again); masm.movq(rax, temp); @@ -976,14 +982,14 @@ void MacroAssembler::wasmAtomicFetchOp64(const wasm::MemoryAccessDesc& access, AtomicOp op, Register64 value, const Address& mem, Register64 temp, Register64 output) { - WasmAtomicFetchOp64(*this, access, op, value.reg, mem, temp.reg, output.reg); + AtomicFetchOp64(*this, &access, op, value.reg, mem, temp.reg, output.reg); } void MacroAssembler::wasmAtomicFetchOp64(const wasm::MemoryAccessDesc& access, AtomicOp op, Register64 value, const BaseIndex& mem, Register64 temp, Register64 output) { - WasmAtomicFetchOp64(*this, access, op, value.reg, mem, temp.reg, output.reg); + AtomicFetchOp64(*this, &access, op, value.reg, mem, temp.reg, output.reg); } void MacroAssembler::wasmAtomicEffectOp64(const wasm::MemoryAccessDesc& access, @@ -1011,4 +1017,30 @@ void MacroAssembler::wasmAtomicEffectOp64(const wasm::MemoryAccessDesc& access, } } +void MacroAssembler::compareExchange64(const Synchronization&, + const Address& mem, Register64 expected, + Register64 replacement, + Register64 output) { + MOZ_ASSERT(output.reg == rax); + if (expected != output) { + movq(expected.reg, output.reg); + } + lock_cmpxchgq(replacement.reg, Operand(mem)); +} + +void MacroAssembler::atomicExchange64(const Synchronization&, + const Address& mem, Register64 value, + Register64 output) { + if (value != output) { + movq(value.reg, output.reg); + } + xchgq(output.reg, Operand(mem)); +} + +void MacroAssembler::atomicFetchOp64(const Synchronization& sync, AtomicOp op, + Register64 value, const Address& mem, + Register64 temp, Register64 output) { + AtomicFetchOp64(*this, nullptr, op, value.reg, mem, temp.reg, output.reg); +} + //}}} check_macroassembler_style diff --git a/js/src/jit/x86-shared/Assembler-x86-shared.h b/js/src/jit/x86-shared/Assembler-x86-shared.h index 15a35d7ac15df..b9c5d3f3bc7c3 100644 --- a/js/src/jit/x86-shared/Assembler-x86-shared.h +++ b/js/src/jit/x86-shared/Assembler-x86-shared.h @@ -209,6 +209,19 @@ class CPUInfo { static void SetSSEVersion(); + // The flags can become set at startup when we JIT non-JS code eagerly; thus + // we reset the flags before setting any flags explicitly during testing, so + // that the flags can be in a consistent state. 
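Editor's note: the point of the reset described above is that by the time a test-only switch runs, the CPU flags may already have been computed, for example while eagerly jitting the atomics stubs at startup, so each switch must start from a clean slate. A hedged sketch of a hypothetical caller (the option name and surrounding handler are assumptions, not part of this patch):

// Hypothetical test-harness code: the Set* entry points below call reset()
// first, discarding any flags computed while jitting non-JS code at startup.
if (testingDisableSSE4) {
  js::jit::CPUInfo::SetSSE4Disabled();
}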
+ + static void reset() { + maxSSEVersion = UnknownSSE; + maxEnabledSSEVersion = UnknownSSE; + avxPresent = false; + avxEnabled = false; + popcntPresent = false; + needAmdBugWorkaround = false; + } + public: static bool IsSSE2Present() { #ifdef JS_CODEGEN_X64 @@ -228,14 +241,19 @@ class CPUInfo { static bool NeedAmdBugWorkaround() { return needAmdBugWorkaround; } static void SetSSE3Disabled() { + reset(); maxEnabledSSEVersion = SSE2; avxEnabled = false; } static void SetSSE4Disabled() { + reset(); maxEnabledSSEVersion = SSSE3; avxEnabled = false; } - static void SetAVXEnabled() { avxEnabled = true; } + static void SetAVXEnabled() { + reset(); + avxEnabled = true; + } }; class AssemblerX86Shared : public AssemblerShared { diff --git a/js/src/jit/x86-shared/AtomicOperations-x86-shared-gcc.h b/js/src/jit/x86-shared/AtomicOperations-x86-shared-gcc.h index ddf8c61a7eb13..9ed4975169a68 100644 --- a/js/src/jit/x86-shared/AtomicOperations-x86-shared-gcc.h +++ b/js/src/jit/x86-shared/AtomicOperations-x86-shared-gcc.h @@ -58,6 +58,15 @@ // For now, we require that the C++ compiler's atomics are lock free, even for // 64-bit accesses. +inline bool js::jit::AtomicOperations::Initialize() { + // Nothing + return true; +} + +inline void js::jit::AtomicOperations::ShutDown() { + // Nothing +} + // When compiling with Clang on 32-bit linux it will be necessary to link with // -latomic to get the proper 64-bit intrinsics. diff --git a/js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h b/js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h index c0b5a0f0a50c4..a6ac141fc2c6b 100644 --- a/js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h +++ b/js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h @@ -37,6 +37,15 @@ // Note, _InterlockedCompareExchange takes the *new* value as the second // argument and the *comparand* (expected old value) as the third argument. +inline bool js::jit::AtomicOperations::Initialize() { + // Nothing + return true; +} + +inline void js::jit::AtomicOperations::ShutDown() { + // Nothing +} + inline bool js::jit::AtomicOperations::hasAtomic8() { return true; } inline bool js::jit::AtomicOperations::isLockfree8() { diff --git a/js/src/vm/Initialization.cpp b/js/src/vm/Initialization.cpp index 0067fde48885a..ce7aa24305864 100644 --- a/js/src/vm/Initialization.cpp +++ b/js/src/vm/Initialization.cpp @@ -17,6 +17,7 @@ #include "builtin/AtomicsObject.h" #include "ds/MemoryProtectionExceptionHandler.h" #include "gc/Statistics.h" +#include "jit/AtomicOperations.h" #include "jit/ExecutableAllocator.h" #include "jit/Ion.h" #include "jit/JitCommon.h" @@ -127,6 +128,8 @@ JS_PUBLIC_API const char* JS::detail::InitWithFailureDiagnostic( RETURN_IF_FAIL(js::vtune::Initialize()); #endif + RETURN_IF_FAIL(js::jit::AtomicOperations::Initialize()); + #if EXPOSE_INTL_API UErrorCode err = U_ZERO_ERROR; u_init(&err); @@ -175,6 +178,8 @@ JS_PUBLIC_API void JS_ShutDown(void) { js::jit::SimulatorProcess::destroy(); #endif + js::jit::AtomicOperations::ShutDown(); + #ifdef JS_TRACE_LOGGING js::DestroyTraceLoggerThreadState(); js::DestroyTraceLoggerGraphState(); From a72c0b4a9de3c9eb880414cfc00a6addfc40a7c8 Mon Sep 17 00:00:00 2001 From: Lars T Hansen Date: Thu, 11 Oct 2018 14:54:25 +0200 Subject: [PATCH 2/9] Bug 1394420 - Consolidate feeling-lucky atomics. 
r=froydnj With jitted primitives for racy atomic access in place, we can consolidate most C++ realizations of the atomic primitives into two headers, one for gcc/Clang and one for MSVC, that will be used as default fallbacks on non-tier-1 platforms. Non-tier-1 platforms can still implement their own atomics layer, as does MIPS already; we leave the MIPS code alone here. --HG-- rename : js/src/jit/none/AtomicOperations-feeling-lucky.h => js/src/jit/shared/AtomicOperations-feeling-lucky-gcc.h rename : js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h => js/src/jit/shared/AtomicOperations-feeling-lucky-msvc.h extra : rebase_source : 2d4cb5cb3be09b1e8cb6eb5df65d65e0c61cff0b --- js/src/jit/AtomicOperations.h | 60 +-- js/src/jit/arm/AtomicOperations-arm.h | 230 ----------- js/src/jit/arm64/AtomicOperations-arm64-gcc.h | 161 -------- .../AtomicOperations-feeling-lucky-gcc.h} | 135 ++++--- .../AtomicOperations-feeling-lucky-msvc.h} | 45 +-- .../shared/AtomicOperations-feeling-lucky.h | 19 + .../AtomicOperations-x86-shared-gcc.h | 244 ------------ .../AtomicOperations-x86-shared-msvc.h | 376 ------------------ 8 files changed, 121 insertions(+), 1149 deletions(-) delete mode 100644 js/src/jit/arm/AtomicOperations-arm.h delete mode 100644 js/src/jit/arm64/AtomicOperations-arm64-gcc.h rename js/src/jit/{none/AtomicOperations-feeling-lucky.h => shared/AtomicOperations-feeling-lucky-gcc.h} (85%) rename js/src/jit/{arm64/AtomicOperations-arm64-msvc.h => shared/AtomicOperations-feeling-lucky-msvc.h} (90%) create mode 100644 js/src/jit/shared/AtomicOperations-feeling-lucky.h delete mode 100644 js/src/jit/x86-shared/AtomicOperations-x86-shared-gcc.h delete mode 100644 js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h diff --git a/js/src/jit/AtomicOperations.h b/js/src/jit/AtomicOperations.h index c70340453b49f..ad88c238cc121 100644 --- a/js/src/jit/AtomicOperations.h +++ b/js/src/jit/AtomicOperations.h @@ -279,24 +279,6 @@ class AtomicOperations { size_t nelem) { memmoveSafeWhenRacy(dest, src, nelem * sizeof(T)); } - -#ifdef DEBUG - // Constraints that must hold for atomic operations on all tier-1 platforms: - // - // - atomic cells can be 1, 2, 4, or 8 bytes - // - all atomic operations are lock-free, including 8-byte operations - // - atomic operations can only be performed on naturally aligned cells - // - // (Tier-2 and tier-3 platforms need not support 8-byte atomics, and if they - // do, they need not be lock-free.) - - template - static bool tier1Constraints(const T* addr) { - static_assert(sizeof(T) <= 8, "atomics supported up to 8 bytes only"); - return (sizeof(T) < 8 || (hasAtomic8() && isLockfree8())) && - !(uintptr_t(addr) & (sizeof(T) - 1)); - } -#endif }; inline bool AtomicOperations::isLockfreeJS(int32_t size) { @@ -340,7 +322,7 @@ inline bool AtomicOperations::isLockfreeJS(int32_t size) { // - write your own support code for the platform+compiler and create a new // case below // -// - include jit/none/AtomicOperations-feeling-lucky.h in a case for the +// - include jit/shared/AtomicOperations-feeling-lucky.h in a case for the // platform below, if you have a gcc-compatible compiler and truly feel // lucky. You may have to add a little code to that file, too. 
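Editor's note: concretely, a port that wants to "feel lucky" adds one more case to the include nest shown in the hunk below. A purely hypothetical example (RISC-V is not part of this patch):

// Hypothetical addition for a gcc/Clang tier-3 port:
#elif defined(__riscv)
#  include "jit/shared/AtomicOperations-feeling-lucky.h"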
// @@ -358,58 +340,38 @@ inline bool AtomicOperations::isLockfreeJS(int32_t size) { # if defined(__clang__) || defined(__GNUC__) # include "jit/mips-shared/AtomicOperations-mips-shared.h" # else -# error "No AtomicOperations support for this platform+compiler combination" +# error "AtomicOperations on MIPS-32 for unknown compiler" # endif #elif defined(__x86_64__) || defined(_M_X64) || defined(__i386__) || \ defined(_M_IX86) # if defined(JS_CODEGEN_X86) || defined(JS_CODEGEN_X64) # include "jit/shared/AtomicOperations-shared-jit.h" -# elif defined(__clang__) || defined(__GNUC__) -# include "jit/x86-shared/AtomicOperations-x86-shared-gcc.h" -# elif defined(_MSC_VER) -# include "jit/x86-shared/AtomicOperations-x86-shared-msvc.h" # else -# error "No AtomicOperations support for this platform+compiler combination" +# include "jit/shared/AtomicOperations-feeling-lucky.h" # endif #elif defined(__arm__) # if defined(JS_CODEGEN_ARM) # include "jit/shared/AtomicOperations-shared-jit.h" -# elif defined(__clang__) || defined(__GNUC__) -# include "jit/arm/AtomicOperations-arm.h" # else -# error "No AtomicOperations support for this platform+compiler combination" +# include "jit/shared/AtomicOperations-feeling-lucky.h" # endif #elif defined(__aarch64__) || defined(_M_ARM64) # if defined(JS_CODEGEN_ARM64) # include "jit/shared/AtomicOperations-shared-jit.h" -# elif defined(__clang__) || defined(__GNUC__) -# include "jit/arm64/AtomicOperations-arm64-gcc.h" -# elif defined(_MSC_VER) -# include "jit/arm64/AtomicOperations-arm64-msvc.h" # else -# error "No AtomicOperations support for this platform+compiler combination" +# include "jit/shared/AtomicOperations-feeling-lucky.h" # endif #elif defined(__mips__) # if defined(__clang__) || defined(__GNUC__) # include "jit/mips-shared/AtomicOperations-mips-shared.h" # else -# error "No AtomicOperations support for this platform+compiler combination" +# error "AtomicOperations on MIPS for an unknown compiler" # endif -#elif defined(__ppc__) || defined(__PPC__) -# include "jit/none/AtomicOperations-feeling-lucky.h" -#elif defined(__sparc__) -# include "jit/none/AtomicOperations-feeling-lucky.h" -#elif defined(__ppc64__) || defined(__PPC64__) || defined(__ppc64le__) || \ - defined(__PPC64LE__) -# include "jit/none/AtomicOperations-feeling-lucky.h" -#elif defined(__alpha__) -# include "jit/none/AtomicOperations-feeling-lucky.h" -#elif defined(__hppa__) -# include "jit/none/AtomicOperations-feeling-lucky.h" -#elif defined(__sh__) -# include "jit/none/AtomicOperations-feeling-lucky.h" -#elif defined(__s390__) || defined(__s390x__) -# include "jit/none/AtomicOperations-feeling-lucky.h" +#elif defined(__ppc__) || defined(__PPC__) || defined(__sparc__) || \ + defined(__ppc64__) || defined(__PPC64__) || defined(__ppc64le__) || \ + defined(__PPC64LE__) || defined(__alpha__) || defined(__hppa__) || \ + defined(__sh__) || defined(__s390__) || defined(__s390x__) +# include "jit/shared/AtomicOperations-feeling-lucky.h" #else # error "No AtomicOperations support provided for this platform" #endif diff --git a/js/src/jit/arm/AtomicOperations-arm.h b/js/src/jit/arm/AtomicOperations-arm.h deleted file mode 100644 index 403079d5283a1..0000000000000 --- a/js/src/jit/arm/AtomicOperations-arm.h +++ /dev/null @@ -1,230 +0,0 @@ -/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- - * vim: set ts=8 sts=2 et sw=2 tw=80: - * This Source Code Form is subject to the terms of the Mozilla Public - * License, v. 2.0. 
If a copy of the MPL was not distributed with this - * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ - -#ifndef jit_arm_AtomicOperations_arm_h -#define jit_arm_AtomicOperations_arm_h - -#include "jit/arm/Architecture-arm.h" - -#include "vm/ArrayBufferObject.h" - -// For documentation, see jit/AtomicOperations.h - -// NOTE, this file is *not* used with the ARM simulator, only when compiling for -// actual ARM hardware. The simulators get the files that are appropriate for -// the hardware the simulator is running on. See the comments before the -// #include nest at the bottom of jit/AtomicOperations.h for more information. - -// Firefox requires gcc > 4.8, so we will always have the __atomic intrinsics -// added for use in C++11 . -// -// Note that using these intrinsics for most operations is not correct: the code -// has undefined behavior. The gcc documentation states that the compiler -// assumes the code is race free. This supposedly means C++ will allow some -// instruction reorderings (effectively those allowed by TSO) even for seq_cst -// ordered operations, but these reorderings are not allowed by JS. To do -// better we will end up with inline assembler or JIT-generated code. - -#if !defined(__clang__) && !defined(__GNUC__) -# error "This file only for gcc-compatible compilers" -#endif - -inline bool js::jit::AtomicOperations::Initialize() { - // Nothing - return true; -} - -inline void js::jit::AtomicOperations::ShutDown() { - // Nothing -} - -inline bool js::jit::AtomicOperations::hasAtomic8() { - // This guard is really only for tier-2 and tier-3 systems: LDREXD and - // STREXD have been available since ARMv6K, and only ARMv7 and later are - // tier-1. - return HasLDSTREXBHD(); -} - -inline bool js::jit::AtomicOperations::isLockfree8() { - // The JIT and the C++ compiler must agree on whether to use atomics - // for 64-bit accesses. There are two ways to do this: either the - // JIT defers to the C++ compiler (so if the C++ code is compiled - // for ARMv6, say, and __atomic_always_lock_free(8) is false, then the - // JIT ignores the fact that the program is running on ARMv7 or newer); - // or the C++ code in this file calls out to run-time generated code - // to do whatever the JIT does. - // - // For now, make the JIT defer to the C++ compiler when we know what - // the C++ compiler will do, otherwise assume a lock is needed. 
- MOZ_ASSERT(__atomic_always_lock_free(sizeof(int8_t), 0)); - MOZ_ASSERT(__atomic_always_lock_free(sizeof(int16_t), 0)); - MOZ_ASSERT(__atomic_always_lock_free(sizeof(int32_t), 0)); - - return hasAtomic8() && __atomic_always_lock_free(sizeof(int64_t), 0); -} - -inline void js::jit::AtomicOperations::fenceSeqCst() { - __atomic_thread_fence(__ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::loadSeqCst(T* addr) { - MOZ_ASSERT(tier1Constraints(addr)); - T v; - __atomic_load(addr, &v, __ATOMIC_SEQ_CST); - return v; -} - -template -inline void js::jit::AtomicOperations::storeSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - __atomic_store(addr, &val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::exchangeSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - T v; - __atomic_exchange(addr, &val, &v, __ATOMIC_SEQ_CST); - return v; -} - -template -inline T js::jit::AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, - T newval) { - MOZ_ASSERT(tier1Constraints(addr)); - __atomic_compare_exchange(addr, &oldval, &newval, false, __ATOMIC_SEQ_CST, - __ATOMIC_SEQ_CST); - return oldval; -} - -template -inline T js::jit::AtomicOperations::fetchAddSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_add(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::fetchSubSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_sub(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::fetchAndSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_and(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::fetchOrSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_or(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::fetchXorSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_xor(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { - MOZ_ASSERT(tier1Constraints(addr)); - T v; - __atomic_load(addr, &v, __ATOMIC_RELAXED); - return v; -} - -namespace js { -namespace jit { - -#define GCC_RACYLOADOP(T) \ - template <> \ - inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { \ - return *addr; \ - } - -// On 32-bit platforms, loadSafeWhenRacy need not be access-atomic for 64-bit -// data, so just use regular accesses instead of the expensive __atomic_load -// solution which must use LDREXD/CLREX. -#ifndef JS_64BIT -GCC_RACYLOADOP(int64_t) -GCC_RACYLOADOP(uint64_t) -#endif - -// Float and double accesses are not access-atomic. -GCC_RACYLOADOP(float) -GCC_RACYLOADOP(double) - -// Clang requires a specialization for uint8_clamped. 
-template <> -inline uint8_clamped js::jit::AtomicOperations::loadSafeWhenRacy( - uint8_clamped* addr) { - uint8_t v; - __atomic_load(&addr->val, &v, __ATOMIC_RELAXED); - return uint8_clamped(v); -} - -#undef GCC_RACYLOADOP - -} // namespace jit -} // namespace js - -template -inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - __atomic_store(addr, &val, __ATOMIC_RELAXED); -} - -namespace js { -namespace jit { - -#define GCC_RACYSTOREOP(T) \ - template <> \ - inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { \ - *addr = val; \ - } - -// On 32-bit platforms, storeSafeWhenRacy need not be access-atomic for 64-bit -// data, so just use regular accesses instead of the expensive __atomic_store -// solution which must use LDREXD/STREXD. -#ifndef JS_64BIT -GCC_RACYSTOREOP(int64_t) -GCC_RACYSTOREOP(uint64_t) -#endif - -// Float and double accesses are not access-atomic. -GCC_RACYSTOREOP(float) -GCC_RACYSTOREOP(double) - -// Clang requires a specialization for uint8_clamped. -template <> -inline void js::jit::AtomicOperations::storeSafeWhenRacy(uint8_clamped* addr, - uint8_clamped val) { - __atomic_store(&addr->val, &val.val, __ATOMIC_RELAXED); -} - -#undef GCC_RACYSTOREOP - -} // namespace jit -} // namespace js - -inline void js::jit::AtomicOperations::memcpySafeWhenRacy(void* dest, - const void* src, - size_t nbytes) { - MOZ_ASSERT(!((char*)dest <= (char*)src && (char*)src < (char*)dest + nbytes)); - MOZ_ASSERT(!((char*)src <= (char*)dest && (char*)dest < (char*)src + nbytes)); - memcpy(dest, src, nbytes); -} - -inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, - const void* src, - size_t nbytes) { - memmove(dest, src, nbytes); -} - -#endif // jit_arm_AtomicOperations_arm_h diff --git a/js/src/jit/arm64/AtomicOperations-arm64-gcc.h b/js/src/jit/arm64/AtomicOperations-arm64-gcc.h deleted file mode 100644 index 5e406a5369557..0000000000000 --- a/js/src/jit/arm64/AtomicOperations-arm64-gcc.h +++ /dev/null @@ -1,161 +0,0 @@ -/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- - * vim: set ts=8 sts=2 et sw=2 tw=80: - * This Source Code Form is subject to the terms of the Mozilla Public - * License, v. 2.0. If a copy of the MPL was not distributed with this - * file, You can obtain one at http://mozilla.org/MPL/2.0/. 
*/ - -/* For documentation, see jit/AtomicOperations.h */ - -#ifndef jit_arm64_AtomicOperations_arm64_h -#define jit_arm64_AtomicOperations_arm64_h - -#include "mozilla/Assertions.h" -#include "mozilla/Types.h" - -#include "vm/ArrayBufferObject.h" - -#if !defined(__clang__) && !defined(__GNUC__) -# error "This file only for gcc-compatible compilers" -#endif - -inline bool js::jit::AtomicOperations::Initialize() { - // Nothing - return true; -} - -inline void js::jit::AtomicOperations::ShutDown() { - // Nothing -} - -inline bool js::jit::AtomicOperations::hasAtomic8() { return true; } - -inline bool js::jit::AtomicOperations::isLockfree8() { - MOZ_ASSERT(__atomic_always_lock_free(sizeof(int8_t), 0)); - MOZ_ASSERT(__atomic_always_lock_free(sizeof(int16_t), 0)); - MOZ_ASSERT(__atomic_always_lock_free(sizeof(int32_t), 0)); - MOZ_ASSERT(__atomic_always_lock_free(sizeof(int64_t), 0)); - return true; -} - -inline void js::jit::AtomicOperations::fenceSeqCst() { - __atomic_thread_fence(__ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::loadSeqCst(T* addr) { - MOZ_ASSERT(tier1Constraints(addr)); - T v; - __atomic_load(addr, &v, __ATOMIC_SEQ_CST); - return v; -} - -template -inline void js::jit::AtomicOperations::storeSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - __atomic_store(addr, &val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::exchangeSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - T v; - __atomic_exchange(addr, &val, &v, __ATOMIC_SEQ_CST); - return v; -} - -template -inline T js::jit::AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, - T newval) { - MOZ_ASSERT(tier1Constraints(addr)); - __atomic_compare_exchange(addr, &oldval, &newval, false, __ATOMIC_SEQ_CST, - __ATOMIC_SEQ_CST); - return oldval; -} - -template -inline T js::jit::AtomicOperations::fetchAddSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_add(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::fetchSubSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_sub(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::fetchAndSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_and(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::fetchOrSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_or(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::fetchXorSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_xor(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { - MOZ_ASSERT(tier1Constraints(addr)); - T v; - __atomic_load(addr, &v, __ATOMIC_RELAXED); - return v; -} - -namespace js { -namespace jit { - -// Clang requires a specialization for uint8_clamped. -template <> -inline js::uint8_clamped js::jit::AtomicOperations::loadSafeWhenRacy( - js::uint8_clamped* addr) { - uint8_t v; - __atomic_load(&addr->val, &v, __ATOMIC_RELAXED); - return js::uint8_clamped(v); -} - -} // namespace jit -} // namespace js - -template -inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - __atomic_store(addr, &val, __ATOMIC_RELAXED); -} - -namespace js { -namespace jit { - -// Clang requires a specialization for uint8_clamped. 
-template <> -inline void js::jit::AtomicOperations::storeSafeWhenRacy( - js::uint8_clamped* addr, js::uint8_clamped val) { - __atomic_store(&addr->val, &val.val, __ATOMIC_RELAXED); -} - -} // namespace jit -} // namespace js - -inline void js::jit::AtomicOperations::memcpySafeWhenRacy(void* dest, - const void* src, - size_t nbytes) { - MOZ_ASSERT(!((char*)dest <= (char*)src && (char*)src < (char*)dest + nbytes)); - MOZ_ASSERT(!((char*)src <= (char*)dest && (char*)dest < (char*)src + nbytes)); - memcpy(dest, src, nbytes); -} - -inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, - const void* src, - size_t nbytes) { - memmove(dest, src, nbytes); -} - -#endif // jit_arm64_AtomicOperations_arm64_h diff --git a/js/src/jit/none/AtomicOperations-feeling-lucky.h b/js/src/jit/shared/AtomicOperations-feeling-lucky-gcc.h similarity index 85% rename from js/src/jit/none/AtomicOperations-feeling-lucky.h rename to js/src/jit/shared/AtomicOperations-feeling-lucky-gcc.h index 89852734d7d31..3270785f2a022 100644 --- a/js/src/jit/none/AtomicOperations-feeling-lucky.h +++ b/js/src/jit/shared/AtomicOperations-feeling-lucky-gcc.h @@ -7,10 +7,10 @@ /* For documentation, see jit/AtomicOperations.h, both the comment block at the * beginning and the #ifdef nest near the end. * - * This is a common file for tier-3 platforms that are not providing - * hardware-specific implementations of the atomic operations. Please keep it - * reasonably platform-independent by adding #ifdefs at the beginning as much as - * possible, not throughout the file. + * This is a common file for tier-3 platforms (including simulators for our + * tier-1 platforms) that are not providing hardware-specific implementations of + * the atomic operations. Please keep it reasonably platform-independent by + * adding #ifdefs at the beginning as much as possible, not throughout the file. * * * !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! @@ -22,12 +22,25 @@ * frequently good enough for tier-3 platforms. */ -#ifndef jit_none_AtomicOperations_feeling_lucky_h -#define jit_none_AtomicOperations_feeling_lucky_h +#ifndef jit_shared_AtomicOperations_feeling_lucky_gcc_h +#define jit_shared_AtomicOperations_feeling_lucky_gcc_h #include "mozilla/Assertions.h" #include "mozilla/Types.h" +// Explicitly exclude tier-1 platforms. + +#if ((defined(__x86_64__) || defined(_M_X64)) && defined(JS_CODEGEN_X64)) || \ + ((defined(__i386__) || defined(_M_IX86)) && defined(JS_CODEGEN_X86)) || \ + (defined(__arm__) && defined(JS_CODEGEN_ARM)) || \ + ((defined(__aarch64__) || defined(_M_ARM64)) && defined(JS_CODEGEN_ARM64)) +# error "Do not use this code on a tier-1 platform when a JIT is available" +#endif + +#if !(defined(__clang__) || defined(__GNUC__)) +# error "This file only for gcc/Clang" +#endif + // 64-bit atomics are not required by the JS spec, and you can compile // SpiderMonkey without them. // @@ -74,21 +87,15 @@ # define GNUC_COMPATIBLE #endif -#ifdef __s390x__ -# define HAS_64BIT_ATOMICS -# define HAS_64BIT_LOCKFREE -# define GNUC_COMPATIBLE -#endif - -// The default implementation tactic for gcc/clang is to use the newer -// __atomic intrinsics added for use in C++11 . Where that -// isn't available, we use GCC's older __sync functions instead. +// The default implementation tactic for gcc/clang is to use the newer __atomic +// intrinsics added for use in C++11 . Where that isn't available, we +// use GCC's older __sync functions instead. 
// -// ATOMICS_IMPLEMENTED_WITH_SYNC_INTRINSICS is kept as a backward -// compatible option for older compilers: enable this to use GCC's old -// __sync functions instead of the newer __atomic functions. This -// will be required for GCC 4.6.x and earlier, and probably for Clang -// 3.1, should we need to use those versions. +// ATOMICS_IMPLEMENTED_WITH_SYNC_INTRINSICS is kept as a backward compatible +// option for older compilers: enable this to use GCC's old __sync functions +// instead of the newer __atomic functions. This will be required for GCC 4.6.x +// and earlier, and probably for Clang 3.1, should we need to use those +// versions. Firefox no longer supports compilers that old. //#define ATOMICS_IMPLEMENTED_WITH_SYNC_INTRINSICS @@ -109,7 +116,8 @@ inline void js::jit::AtomicOperations::ShutDown() { // Nothing } -#ifdef GNUC_COMPATIBLE +// When compiling with Clang on 32-bit linux it will be necessary to link with +// -latomic to get the proper 64-bit intrinsics. inline bool js::jit::AtomicOperations::hasAtomic8() { # if defined(HAS_64BIT_ATOMICS) @@ -197,6 +205,41 @@ inline void AtomicOperations::storeSeqCst(uint64_t* addr, uint64_t val) { } // namespace js # endif +template +inline T js::jit::AtomicOperations::exchangeSeqCst(T* addr, T val) { + static_assert(sizeof(T) <= 8, "atomics supported up to 8 bytes only"); +# ifdef ATOMICS_IMPLEMENTED_WITH_SYNC_INTRINSICS + T v; + __sync_synchronize(); + do { + v = *addr; + } while (__sync_val_compare_and_swap(addr, v, val) != v); + return v; +# else + T v; + __atomic_exchange(addr, &val, &v, __ATOMIC_SEQ_CST); + return v; +# endif +} + +# ifndef HAS_64BIT_ATOMICS +namespace js { +namespace jit { + +template <> +inline int64_t AtomicOperations::exchangeSeqCst(int64_t* addr, int64_t val) { + MOZ_CRASH("No 64-bit atomics"); +} + +template <> +inline uint64_t AtomicOperations::exchangeSeqCst(uint64_t* addr, uint64_t val) { + MOZ_CRASH("No 64-bit atomics"); +} + +} // namespace jit +} // namespace js +# endif + template inline T js::jit::AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, T newval) { @@ -377,6 +420,9 @@ inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { static_assert(sizeof(T) <= 8, "atomics supported up to 8 bytes only"); // This is actually roughly right even on 32-bit platforms since in that // case, double, int64, and uint64 loads need not be access-atomic. + // + // We could use __atomic_load, but it would be needlessly expensive on + // 32-bit platforms that could support it and just plain wrong on others. return *addr; } @@ -385,6 +431,9 @@ inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { static_assert(sizeof(T) <= 8, "atomics supported up to 8 bytes only"); // This is actually roughly right even on 32-bit platforms since in that // case, double, int64, and uint64 loads need not be access-atomic. + // + // We could use __atomic_store, but it would be needlessly expensive on + // 32-bit platforms that could support it and just plain wrong on others. 
*addr = val; } @@ -402,50 +451,8 @@ inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, ::memmove(dest, src, nbytes); } -template -inline T js::jit::AtomicOperations::exchangeSeqCst(T* addr, T val) { - static_assert(sizeof(T) <= 8, "atomics supported up to 8 bytes only"); -# ifdef ATOMICS_IMPLEMENTED_WITH_SYNC_INTRINSICS - T v; - __sync_synchronize(); - do { - v = *addr; - } while (__sync_val_compare_and_swap(addr, v, val) != v); - return v; -# else - T v; - __atomic_exchange(addr, &val, &v, __ATOMIC_SEQ_CST); - return v; -# endif -} - -# ifndef HAS_64BIT_ATOMICS -namespace js { -namespace jit { - -template <> -inline int64_t AtomicOperations::exchangeSeqCst(int64_t* addr, int64_t val) { - MOZ_CRASH("No 64-bit atomics"); -} - -template <> -inline uint64_t AtomicOperations::exchangeSeqCst(uint64_t* addr, uint64_t val) { - MOZ_CRASH("No 64-bit atomics"); -} - -} // namespace jit -} // namespace js -# endif - -#else - -# error "Either use GCC or Clang, or add code here" - -#endif - #undef ATOMICS_IMPLEMENTED_WITH_SYNC_INTRINSICS -#undef GNUC_COMPATIBLE #undef HAS_64BIT_ATOMICS #undef HAS_64BIT_LOCKFREE -#endif // jit_none_AtomicOperations_feeling_lucky_h +#endif // jit_shared_AtomicOperations_feeling_lucky_gcc_h diff --git a/js/src/jit/arm64/AtomicOperations-arm64-msvc.h b/js/src/jit/shared/AtomicOperations-feeling-lucky-msvc.h similarity index 90% rename from js/src/jit/arm64/AtomicOperations-arm64-msvc.h rename to js/src/jit/shared/AtomicOperations-feeling-lucky-msvc.h index 69b6dc424a926..7ca961bbdf5ec 100644 --- a/js/src/jit/arm64/AtomicOperations-arm64-msvc.h +++ b/js/src/jit/shared/AtomicOperations-feeling-lucky-msvc.h @@ -4,22 +4,26 @@ * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ -#ifndef jit_shared_AtomicOperations_x86_shared_msvc_h -#define jit_shared_AtomicOperations_x86_shared_msvc_h +#ifndef jit_shared_AtomicOperations_feeling_lucky_msvc_h +#define jit_shared_AtomicOperations_feeling_lucky_msvc_h #include "mozilla/Assertions.h" #include "mozilla/Types.h" +// Explicitly exclude tier-1 platforms. + +#if ((defined(__x86_64__) || defined(_M_X64)) && defined(JS_CODEGEN_X64)) || \ + ((defined(__i386__) || defined(_M_IX86)) && defined(JS_CODEGEN_X86)) || \ + (defined(__arm__) && defined(JS_CODEGEN_ARM)) || \ + ((defined(__aarch64__) || defined(_M_ARM64)) && defined(JS_CODEGEN_ARM64)) +# error "Do not use this code on a tier-1 platform when a JIT is available" +#endif + #if !defined(_MSC_VER) # error "This file only for Microsoft Visual C++" #endif -// For overall documentation, see jit/AtomicOperations.h/ -// -// For general comments on lock-freedom, access-atomicity, and related matters -// on x86 and x64, notably for justification of the implementations of the -// 64-bit primitives on 32-bit systems, see the comment block in -// AtomicOperations-x86-shared-gcc.h. +// For overall documentation, see jit/AtomicOperations.h. // Below, _ReadWriteBarrier is a compiler directive, preventing reordering of // instructions and reuse of memory values across it in the compiler, but having @@ -30,9 +34,7 @@ // 32-bit operations, and 64-bit operations on 64-bit systems) and otherwise // falls back on CMPXCHG8B for 64-bit operations on 32-bit systems. We could be // using those functions in many cases here (though not all). 
I have not done -// so because (a) I don't yet know how far back those functions are supported -// and (b) I expect we'll end up dropping into assembler here eventually so as -// to guarantee that the C++ compiler won't optimize the code. +// so because I don't yet know how far back those functions are supported. // Note, _InterlockedCompareExchange takes the *new* value as the second // argument and the *comparand* (expected old value) as the third argument. @@ -62,14 +64,19 @@ inline bool js::jit::AtomicOperations::isLockfree8() { inline void js::jit::AtomicOperations::fenceSeqCst() { _ReadWriteBarrier(); +#if defined(_M_IX86) || defined(_M_X64) + _mm_mfence(); +#elif defined(_M_ARM64) // MemoryBarrier is defined in winnt.h, which we don't want to include here. // This expression is the expansion of MemoryBarrier. __dmb(_ARM64_BARRIER_SY); +#else +#error "Unknown hardware for MSVC" +#endif } template inline T js::jit::AtomicOperations::loadSeqCst(T* addr) { - MOZ_ASSERT(tier1Constraints(addr)); _ReadWriteBarrier(); T v = *addr; _ReadWriteBarrier(); @@ -83,7 +90,6 @@ namespace jit { # define MSC_LOADOP(T) \ template <> \ inline T AtomicOperations::loadSeqCst(T* addr) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ _ReadWriteBarrier(); \ return (T)_InterlockedCompareExchange64((__int64 volatile*)addr, 0, 0); \ } @@ -99,7 +105,6 @@ MSC_LOADOP(uint64_t) template inline void js::jit::AtomicOperations::storeSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); _ReadWriteBarrier(); *addr = val; fenceSeqCst(); @@ -112,7 +117,6 @@ namespace jit { # define MSC_STOREOP(T) \ template <> \ inline void AtomicOperations::storeSeqCst(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ _ReadWriteBarrier(); \ T oldval = *addr; \ for (;;) { \ @@ -136,7 +140,6 @@ MSC_STOREOP(uint64_t) #define MSC_EXCHANGEOP(T, U, xchgop) \ template <> \ inline T AtomicOperations::exchangeSeqCst(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ return (T)xchgop((U volatile*)addr, (U)val); \ } @@ -144,7 +147,6 @@ MSC_STOREOP(uint64_t) # define MSC_EXCHANGEOP_CAS(T) \ template <> \ inline T AtomicOperations::exchangeSeqCst(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ _ReadWriteBarrier(); \ T oldval = *addr; \ for (;;) { \ @@ -186,7 +188,6 @@ MSC_EXCHANGEOP(uint64_t, __int64, _InterlockedExchange64) template <> \ inline T AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, \ T newval) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ return (T)cmpxchg((U volatile*)addr, (U)newval, (U)oldval); \ } @@ -210,7 +211,6 @@ MSC_CAS(uint64_t, __int64, _InterlockedCompareExchange64) #define MSC_FETCHADDOP(T, U, xadd) \ template <> \ inline T AtomicOperations::fetchAddSeqCst(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ return (T)xadd((U volatile*)addr, (U)val); \ } @@ -224,7 +224,6 @@ MSC_CAS(uint64_t, __int64, _InterlockedCompareExchange64) # define MSC_FETCHADDOP_CAS(T) \ template <> \ inline T AtomicOperations::fetchAddSeqCst(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ _ReadWriteBarrier(); \ T oldval = *addr; \ for (;;) { \ @@ -276,7 +275,6 @@ MSC_FETCHSUBOP(uint64_t) #define MSC_FETCHBITOPX(T, U, name, op) \ template <> \ inline T AtomicOperations::name(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ return (T)op((U volatile*)addr, (U)val); \ } @@ -292,7 +290,6 @@ MSC_FETCHSUBOP(uint64_t) # define MSC_FETCHBITOPX_CAS(T, name, OP) \ template <> \ inline T AtomicOperations::name(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ 
_ReadWriteBarrier(); \ T oldval = *addr; \ for (;;) { \ @@ -347,7 +344,6 @@ MSC_FETCHBITOP(uint64_t, __int64, _InterlockedAnd64, _InterlockedOr64, template inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { - MOZ_ASSERT(tier1Constraints(addr)); // This is also appropriate for double, int64, and uint64 on 32-bit // platforms since there are no guarantees of access-atomicity. return *addr; @@ -355,7 +351,6 @@ inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { template inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); // This is also appropriate for double, int64, and uint64 on 32-bit // platforms since there are no guarantees of access-atomicity. *addr = val; @@ -375,4 +370,4 @@ inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, ::memmove(dest, src, nbytes); } -#endif // jit_shared_AtomicOperations_x86_shared_msvc_h +#endif // jit_shared_AtomicOperations_feeling_lucky_msvc_h diff --git a/js/src/jit/shared/AtomicOperations-feeling-lucky.h b/js/src/jit/shared/AtomicOperations-feeling-lucky.h new file mode 100644 index 0000000000000..a399f271ae752 --- /dev/null +++ b/js/src/jit/shared/AtomicOperations-feeling-lucky.h @@ -0,0 +1,19 @@ +/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*- + * vim: set ts=8 sts=4 et sw=4 tw=99: + * This Source Code Form is subject to the terms of the Mozilla Public + * License, v. 2.0. If a copy of the MPL was not distributed with this + * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ + +#ifndef jit_shared_AtomicOperations_feeling_lucky_h +#define jit_shared_AtomicOperations_feeling_lucky_h + +#if defined(__clang__) || defined(__GNUC__) +# include "jit/shared/AtomicOperations-feeling-lucky-gcc.h" +#elif defined(_MSC_VER) +# include "jit/shared/AtomicOperations-feeling-lucky-msvc.h" +#else +# error "No AtomicOperations support for this platform+compiler combination" +#endif + +#endif // jit_shared_AtomicOperations_feeling_lucky_h + diff --git a/js/src/jit/x86-shared/AtomicOperations-x86-shared-gcc.h b/js/src/jit/x86-shared/AtomicOperations-x86-shared-gcc.h deleted file mode 100644 index 9ed4975169a68..0000000000000 --- a/js/src/jit/x86-shared/AtomicOperations-x86-shared-gcc.h +++ /dev/null @@ -1,244 +0,0 @@ -/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- - * vim: set ts=8 sts=2 et sw=2 tw=80: - * This Source Code Form is subject to the terms of the Mozilla Public - * License, v. 2.0. If a copy of the MPL was not distributed with this - * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ - -/* For overall documentation, see jit/AtomicOperations.h */ - -#ifndef jit_shared_AtomicOperations_x86_shared_gcc_h -#define jit_shared_AtomicOperations_x86_shared_gcc_h - -#include "mozilla/Assertions.h" -#include "mozilla/Types.h" - -#include "vm/ArrayBufferObject.h" - -#if !defined(__clang__) && !defined(__GNUC__) -# error "This file only for gcc-compatible compilers" -#endif - -// Lock-freedom and access-atomicity on x86 and x64. -// -// In general, aligned accesses are access-atomic up to 8 bytes ever since the -// Pentium; Firefox requires SSE2, which was introduced with the Pentium 4, so -// we may assume access-atomicity. -// -// Four-byte accesses and smaller are simple: -// - Use MOV{B,W,L} to load and store. Stores require a post-fence -// for sequential consistency as defined by the JS spec. The fence -// can be MFENCE, or the store can be implemented using XCHG. 
-// - For compareExchange use LOCK; CMPXCGH{B,W,L} -// - For exchange, use XCHG{B,W,L} -// - For add, etc use LOCK; ADD{B,W,L} etc -// -// Eight-byte accesses are easy on x64: -// - Use MOVQ to load and store (again with a fence for the store) -// - For compareExchange, we use CMPXCHGQ -// - For exchange, we use XCHGQ -// - For add, etc use LOCK; ADDQ etc -// -// Eight-byte accesses are harder on x86: -// - For load, use a sequence of MOVL + CMPXCHG8B -// - For store, use a sequence of MOVL + a CMPXCGH8B in a loop, -// no additional fence required -// - For exchange, do as for store -// - For add, etc do as for store - -// Firefox requires gcc > 4.8, so we will always have the __atomic intrinsics -// added for use in C++11 . -// -// Note that using these intrinsics for most operations is not correct: the code -// has undefined behavior. The gcc documentation states that the compiler -// assumes the code is race free. This supposedly means C++ will allow some -// instruction reorderings (effectively those allowed by TSO) even for seq_cst -// ordered operations, but these reorderings are not allowed by JS. To do -// better we will end up with inline assembler or JIT-generated code. - -// For now, we require that the C++ compiler's atomics are lock free, even for -// 64-bit accesses. - -inline bool js::jit::AtomicOperations::Initialize() { - // Nothing - return true; -} - -inline void js::jit::AtomicOperations::ShutDown() { - // Nothing -} - -// When compiling with Clang on 32-bit linux it will be necessary to link with -// -latomic to get the proper 64-bit intrinsics. - -inline bool js::jit::AtomicOperations::hasAtomic8() { return true; } - -inline bool js::jit::AtomicOperations::isLockfree8() { - MOZ_ASSERT(__atomic_always_lock_free(sizeof(int8_t), 0)); - MOZ_ASSERT(__atomic_always_lock_free(sizeof(int16_t), 0)); - MOZ_ASSERT(__atomic_always_lock_free(sizeof(int32_t), 0)); - MOZ_ASSERT(__atomic_always_lock_free(sizeof(int64_t), 0)); - return true; -} - -inline void js::jit::AtomicOperations::fenceSeqCst() { - __atomic_thread_fence(__ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::loadSeqCst(T* addr) { - MOZ_ASSERT(tier1Constraints(addr)); - T v; - __atomic_load(addr, &v, __ATOMIC_SEQ_CST); - return v; -} - -template -inline void js::jit::AtomicOperations::storeSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - __atomic_store(addr, &val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::exchangeSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - T v; - __atomic_exchange(addr, &val, &v, __ATOMIC_SEQ_CST); - return v; -} - -template -inline T js::jit::AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, - T newval) { - MOZ_ASSERT(tier1Constraints(addr)); - __atomic_compare_exchange(addr, &oldval, &newval, false, __ATOMIC_SEQ_CST, - __ATOMIC_SEQ_CST); - return oldval; -} - -template -inline T js::jit::AtomicOperations::fetchAddSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_add(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::fetchSubSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_sub(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::fetchAndSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_and(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::fetchOrSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); 
- return __atomic_fetch_or(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::fetchXorSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - return __atomic_fetch_xor(addr, val, __ATOMIC_SEQ_CST); -} - -template -inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { - MOZ_ASSERT(tier1Constraints(addr)); - T v; - __atomic_load(addr, &v, __ATOMIC_RELAXED); - return v; -} - -namespace js { -namespace jit { - -#define GCC_RACYLOADOP(T) \ - template <> \ - inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { \ - return *addr; \ - } - -// On 32-bit platforms, loadSafeWhenRacy need not be access-atomic for 64-bit -// data, so just use regular accesses instead of the expensive __atomic_load -// solution which must use CMPXCHG8B. -#ifndef JS_64BIT -GCC_RACYLOADOP(int64_t) -GCC_RACYLOADOP(uint64_t) -#endif - -// Float and double accesses are not access-atomic. -GCC_RACYLOADOP(float) -GCC_RACYLOADOP(double) - -// Clang requires a specialization for uint8_clamped. -template <> -inline uint8_clamped js::jit::AtomicOperations::loadSafeWhenRacy( - uint8_clamped* addr) { - uint8_t v; - __atomic_load(&addr->val, &v, __ATOMIC_RELAXED); - return uint8_clamped(v); -} - -#undef GCC_RACYLOADOP - -} // namespace jit -} // namespace js - -template -inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - __atomic_store(addr, &val, __ATOMIC_RELAXED); -} - -namespace js { -namespace jit { - -#define GCC_RACYSTOREOP(T) \ - template <> \ - inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { \ - *addr = val; \ - } - -// On 32-bit platforms, storeSafeWhenRacy need not be access-atomic for 64-bit -// data, so just use regular accesses instead of the expensive __atomic_store -// solution which must use CMPXCHG8B. -#ifndef JS_64BIT -GCC_RACYSTOREOP(int64_t) -GCC_RACYSTOREOP(uint64_t) -#endif - -// Float and double accesses are not access-atomic. -GCC_RACYSTOREOP(float) -GCC_RACYSTOREOP(double) - -// Clang requires a specialization for uint8_clamped. -template <> -inline void js::jit::AtomicOperations::storeSafeWhenRacy(uint8_clamped* addr, - uint8_clamped val) { - __atomic_store(&addr->val, &val.val, __ATOMIC_RELAXED); -} - -#undef GCC_RACYSTOREOP - -} // namespace jit -} // namespace js - -inline void js::jit::AtomicOperations::memcpySafeWhenRacy(void* dest, - const void* src, - size_t nbytes) { - MOZ_ASSERT(!((char*)dest <= (char*)src && (char*)src < (char*)dest + nbytes)); - MOZ_ASSERT(!((char*)src <= (char*)dest && (char*)dest < (char*)src + nbytes)); - ::memcpy(dest, src, nbytes); -} - -inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, - const void* src, - size_t nbytes) { - ::memmove(dest, src, nbytes); -} - -#endif // jit_shared_AtomicOperations_x86_shared_gcc_h diff --git a/js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h b/js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h deleted file mode 100644 index a6ac141fc2c6b..0000000000000 --- a/js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h +++ /dev/null @@ -1,376 +0,0 @@ -/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- - * vim: set ts=8 sts=2 et sw=2 tw=80: - * This Source Code Form is subject to the terms of the Mozilla Public - * License, v. 2.0. If a copy of the MPL was not distributed with this - * file, You can obtain one at http://mozilla.org/MPL/2.0/. 
*/ - -#ifndef jit_shared_AtomicOperations_x86_shared_msvc_h -#define jit_shared_AtomicOperations_x86_shared_msvc_h - -#include "mozilla/Assertions.h" -#include "mozilla/Types.h" - -#if !defined(_MSC_VER) -# error "This file only for Microsoft Visual C++" -#endif - -// For overall documentation, see jit/AtomicOperations.h/ -// -// For general comments on lock-freedom, access-atomicity, and related matters -// on x86 and x64, notably for justification of the implementations of the -// 64-bit primitives on 32-bit systems, see the comment block in -// AtomicOperations-x86-shared-gcc.h. - -// Below, _ReadWriteBarrier is a compiler directive, preventing reordering of -// instructions and reuse of memory values across it in the compiler, but having -// no impact on what the CPU does. - -// Note, here we use MSVC intrinsics directly. But MSVC supports a slightly -// higher level of function which uses the intrinsic when possible (8, 16, and -// 32-bit operations, and 64-bit operations on 64-bit systems) and otherwise -// falls back on CMPXCHG8B for 64-bit operations on 32-bit systems. We could be -// using those functions in many cases here (though not all). I have not done -// so because (a) I don't yet know how far back those functions are supported -// and (b) I expect we'll end up dropping into assembler here eventually so as -// to guarantee that the C++ compiler won't optimize the code. - -// Note, _InterlockedCompareExchange takes the *new* value as the second -// argument and the *comparand* (expected old value) as the third argument. - -inline bool js::jit::AtomicOperations::Initialize() { - // Nothing - return true; -} - -inline void js::jit::AtomicOperations::ShutDown() { - // Nothing -} - -inline bool js::jit::AtomicOperations::hasAtomic8() { return true; } - -inline bool js::jit::AtomicOperations::isLockfree8() { - // The MSDN docs suggest very strongly that if code is compiled for Pentium - // or better the 64-bit primitives will be lock-free, see eg the "Remarks" - // secion of the page for _InterlockedCompareExchange64, currently here: - // https://msdn.microsoft.com/en-us/library/ttk2z1ws%28v=vs.85%29.aspx - // - // But I've found no way to assert that at compile time or run time, there - // appears to be no WinAPI is_lock_free() test. 
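(One possible compile-time check, assuming a C++17 toolchain and the standard <atomic> header rather than any WinAPI facility; a sketch only:

  #include <atomic>
  #include <cstdint>

  // Fail the build if 64-bit std::atomic would need a lock on this target.
  static_assert(std::atomic<int64_t>::is_always_lock_free,
                "expected lock-free 64-bit atomics");
)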
- - return true; -} - -inline void js::jit::AtomicOperations::fenceSeqCst() { - _ReadWriteBarrier(); - _mm_mfence(); -} - -template -inline T js::jit::AtomicOperations::loadSeqCst(T* addr) { - MOZ_ASSERT(tier1Constraints(addr)); - _ReadWriteBarrier(); - T v = *addr; - _ReadWriteBarrier(); - return v; -} - -#ifdef _M_IX86 -namespace js { -namespace jit { - -# define MSC_LOADOP(T) \ - template <> \ - inline T AtomicOperations::loadSeqCst(T* addr) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ - _ReadWriteBarrier(); \ - return (T)_InterlockedCompareExchange64((__int64 volatile*)addr, 0, 0); \ - } - -MSC_LOADOP(int64_t) -MSC_LOADOP(uint64_t) - -# undef MSC_LOADOP - -} // namespace jit -} // namespace js -#endif // _M_IX86 - -template -inline void js::jit::AtomicOperations::storeSeqCst(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - _ReadWriteBarrier(); - *addr = val; - fenceSeqCst(); -} - -#ifdef _M_IX86 -namespace js { -namespace jit { - -# define MSC_STOREOP(T) \ - template <> \ - inline void AtomicOperations::storeSeqCst(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ - _ReadWriteBarrier(); \ - T oldval = *addr; \ - for (;;) { \ - T nextval = (T)_InterlockedCompareExchange64( \ - (__int64 volatile*)addr, (__int64)val, (__int64)oldval); \ - if (nextval == oldval) break; \ - oldval = nextval; \ - } \ - _ReadWriteBarrier(); \ - } - -MSC_STOREOP(int64_t) -MSC_STOREOP(uint64_t) - -# undef MSC_STOREOP - -} // namespace jit -} // namespace js -#endif // _M_IX86 - -#define MSC_EXCHANGEOP(T, U, xchgop) \ - template <> \ - inline T AtomicOperations::exchangeSeqCst(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ - return (T)xchgop((U volatile*)addr, (U)val); \ - } - -#ifdef _M_IX86 -# define MSC_EXCHANGEOP_CAS(T) \ - template <> \ - inline T AtomicOperations::exchangeSeqCst(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ - _ReadWriteBarrier(); \ - T oldval = *addr; \ - for (;;) { \ - T nextval = (T)_InterlockedCompareExchange64( \ - (__int64 volatile*)addr, (__int64)val, (__int64)oldval); \ - if (nextval == oldval) break; \ - oldval = nextval; \ - } \ - _ReadWriteBarrier(); \ - return oldval; \ - } -#endif // _M_IX86 - -namespace js { -namespace jit { - -MSC_EXCHANGEOP(int8_t, char, _InterlockedExchange8) -MSC_EXCHANGEOP(uint8_t, char, _InterlockedExchange8) -MSC_EXCHANGEOP(int16_t, short, _InterlockedExchange16) -MSC_EXCHANGEOP(uint16_t, short, _InterlockedExchange16) -MSC_EXCHANGEOP(int32_t, long, _InterlockedExchange) -MSC_EXCHANGEOP(uint32_t, long, _InterlockedExchange) - -#ifdef _M_IX86 -MSC_EXCHANGEOP_CAS(int64_t) -MSC_EXCHANGEOP_CAS(uint64_t) -#else -MSC_EXCHANGEOP(int64_t, __int64, _InterlockedExchange64) -MSC_EXCHANGEOP(uint64_t, __int64, _InterlockedExchange64) -#endif - -} // namespace jit -} // namespace js - -#undef MSC_EXCHANGEOP -#undef MSC_EXCHANGEOP_CAS - -#define MSC_CAS(T, U, cmpxchg) \ - template <> \ - inline T AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, \ - T newval) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ - return (T)cmpxchg((U volatile*)addr, (U)newval, (U)oldval); \ - } - -namespace js { -namespace jit { - -MSC_CAS(int8_t, char, _InterlockedCompareExchange8) -MSC_CAS(uint8_t, char, _InterlockedCompareExchange8) -MSC_CAS(int16_t, short, _InterlockedCompareExchange16) -MSC_CAS(uint16_t, short, _InterlockedCompareExchange16) -MSC_CAS(int32_t, long, _InterlockedCompareExchange) -MSC_CAS(uint32_t, long, _InterlockedCompareExchange) -MSC_CAS(int64_t, __int64, _InterlockedCompareExchange64) -MSC_CAS(uint64_t, __int64, 
_InterlockedCompareExchange64) - -} // namespace jit -} // namespace js - -#undef MSC_CAS - -#define MSC_FETCHADDOP(T, U, xadd) \ - template <> \ - inline T AtomicOperations::fetchAddSeqCst(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ - return (T)xadd((U volatile*)addr, (U)val); \ - } - -#define MSC_FETCHSUBOP(T) \ - template <> \ - inline T AtomicOperations::fetchSubSeqCst(T* addr, T val) { \ - return fetchAddSeqCst(addr, (T)(0 - val)); \ - } - -#ifdef _M_IX86 -# define MSC_FETCHADDOP_CAS(T) \ - template <> \ - inline T AtomicOperations::fetchAddSeqCst(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ - _ReadWriteBarrier(); \ - T oldval = *addr; \ - for (;;) { \ - T nextval = (T)_InterlockedCompareExchange64((__int64 volatile*)addr, \ - (__int64)(oldval + val), \ - (__int64)oldval); \ - if (nextval == oldval) break; \ - oldval = nextval; \ - } \ - _ReadWriteBarrier(); \ - return oldval; \ - } -#endif // _M_IX86 - -namespace js { -namespace jit { - -MSC_FETCHADDOP(int8_t, char, _InterlockedExchangeAdd8) -MSC_FETCHADDOP(uint8_t, char, _InterlockedExchangeAdd8) -MSC_FETCHADDOP(int16_t, short, _InterlockedExchangeAdd16) -MSC_FETCHADDOP(uint16_t, short, _InterlockedExchangeAdd16) -MSC_FETCHADDOP(int32_t, long, _InterlockedExchangeAdd) -MSC_FETCHADDOP(uint32_t, long, _InterlockedExchangeAdd) - -#ifdef _M_IX86 -MSC_FETCHADDOP_CAS(int64_t) -MSC_FETCHADDOP_CAS(uint64_t) -#else -MSC_FETCHADDOP(int64_t, __int64, _InterlockedExchangeAdd64) -MSC_FETCHADDOP(uint64_t, __int64, _InterlockedExchangeAdd64) -#endif - -MSC_FETCHSUBOP(int8_t) -MSC_FETCHSUBOP(uint8_t) -MSC_FETCHSUBOP(int16_t) -MSC_FETCHSUBOP(uint16_t) -MSC_FETCHSUBOP(int32_t) -MSC_FETCHSUBOP(uint32_t) -MSC_FETCHSUBOP(int64_t) -MSC_FETCHSUBOP(uint64_t) - -} // namespace jit -} // namespace js - -#undef MSC_FETCHADDOP -#undef MSC_FETCHADDOP_CAS -#undef MSC_FETCHSUBOP - -#define MSC_FETCHBITOPX(T, U, name, op) \ - template <> \ - inline T AtomicOperations::name(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ - return (T)op((U volatile*)addr, (U)val); \ - } - -#define MSC_FETCHBITOP(T, U, andop, orop, xorop) \ - MSC_FETCHBITOPX(T, U, fetchAndSeqCst, andop) \ - MSC_FETCHBITOPX(T, U, fetchOrSeqCst, orop) \ - MSC_FETCHBITOPX(T, U, fetchXorSeqCst, xorop) - -#ifdef _M_IX86 -# define AND_OP & -# define OR_OP | -# define XOR_OP ^ -# define MSC_FETCHBITOPX_CAS(T, name, OP) \ - template <> \ - inline T AtomicOperations::name(T* addr, T val) { \ - MOZ_ASSERT(tier1Constraints(addr)); \ - _ReadWriteBarrier(); \ - T oldval = *addr; \ - for (;;) { \ - T nextval = (T)_InterlockedCompareExchange64((__int64 volatile*)addr, \ - (__int64)(oldval OP val), \ - (__int64)oldval); \ - if (nextval == oldval) break; \ - oldval = nextval; \ - } \ - _ReadWriteBarrier(); \ - return oldval; \ - } - -# define MSC_FETCHBITOP_CAS(T) \ - MSC_FETCHBITOPX_CAS(T, fetchAndSeqCst, AND_OP) \ - MSC_FETCHBITOPX_CAS(T, fetchOrSeqCst, OR_OP) \ - MSC_FETCHBITOPX_CAS(T, fetchXorSeqCst, XOR_OP) - -#endif - -namespace js { -namespace jit { - -MSC_FETCHBITOP(int8_t, char, _InterlockedAnd8, _InterlockedOr8, - _InterlockedXor8) -MSC_FETCHBITOP(uint8_t, char, _InterlockedAnd8, _InterlockedOr8, - _InterlockedXor8) -MSC_FETCHBITOP(int16_t, short, _InterlockedAnd16, _InterlockedOr16, - _InterlockedXor16) -MSC_FETCHBITOP(uint16_t, short, _InterlockedAnd16, _InterlockedOr16, - _InterlockedXor16) -MSC_FETCHBITOP(int32_t, long, _InterlockedAnd, _InterlockedOr, _InterlockedXor) -MSC_FETCHBITOP(uint32_t, long, _InterlockedAnd, _InterlockedOr, _InterlockedXor) - -#ifdef 
_M_IX86 -MSC_FETCHBITOP_CAS(int64_t) -MSC_FETCHBITOP_CAS(uint64_t) -#else -MSC_FETCHBITOP(int64_t, __int64, _InterlockedAnd64, _InterlockedOr64, - _InterlockedXor64) -MSC_FETCHBITOP(uint64_t, __int64, _InterlockedAnd64, _InterlockedOr64, - _InterlockedXor64) -#endif - -} // namespace jit -} // namespace js - -#undef MSC_FETCHBITOPX_CAS -#undef MSC_FETCHBITOPX -#undef MSC_FETCHBITOP_CAS -#undef MSC_FETCHBITOP - -template -inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { - MOZ_ASSERT(tier1Constraints(addr)); - // This is also appropriate for double, int64, and uint64 on 32-bit - // platforms since there are no guarantees of access-atomicity. - return *addr; -} - -template -inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { - MOZ_ASSERT(tier1Constraints(addr)); - // This is also appropriate for double, int64, and uint64 on 32-bit - // platforms since there are no guarantees of access-atomicity. - *addr = val; -} - -inline void js::jit::AtomicOperations::memcpySafeWhenRacy(void* dest, - const void* src, - size_t nbytes) { - MOZ_ASSERT(!((char*)dest <= (char*)src && (char*)src < (char*)dest + nbytes)); - MOZ_ASSERT(!((char*)src <= (char*)dest && (char*)dest < (char*)src + nbytes)); - ::memcpy(dest, src, nbytes); -} - -inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, - const void* src, - size_t nbytes) { - ::memmove(dest, src, nbytes); -} - -#endif // jit_shared_AtomicOperations_x86_shared_msvc_h From 223df12d99555a5b36dfe08b43c366337d8aacdf Mon Sep 17 00:00:00 2001 From: Ciure Andrei Date: Mon, 21 Jan 2019 14:26:34 +0200 Subject: [PATCH 3/9] Backed out 2 changesets (bug 1394420) for failing testAtomicOperations.cpp, ESling and jit failures CLOSED TREE Backed out changeset b2ffeeac7326 (bug 1394420) Backed out changeset 2f5be1913934 (bug 1394420) --HG-- rename : js/src/jit/shared/AtomicOperations-feeling-lucky-gcc.h => js/src/jit/none/AtomicOperations-feeling-lucky.h rename : js/src/jit/shared/AtomicOperations-feeling-lucky-msvc.h => js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h --- .../jit-test/tests/atomics/memcpy-fidelity.js | 181 --- js/src/jit/AtomicOperations.h | 73 +- js/src/jit/MacroAssembler.h | 48 +- js/src/jit/arm/AtomicOperations-arm.h | 221 ++++ js/src/jit/arm/MacroAssembler-arm.cpp | 29 +- js/src/jit/arm64/AtomicOperations-arm64-gcc.h | 152 +++ .../AtomicOperations-arm64-msvc.h} | 54 +- js/src/jit/arm64/MacroAssembler-arm64.cpp | 23 - .../AtomicOperations-mips-shared.h | 9 - js/src/jit/moz.build | 5 +- .../AtomicOperations-feeling-lucky.h} | 144 ++- .../shared/AtomicOperations-feeling-lucky.h | 19 - .../shared/AtomicOperations-shared-jit.cpp | 1018 ----------------- .../jit/shared/AtomicOperations-shared-jit.h | 605 ---------- js/src/jit/x64/MacroAssembler-x64.cpp | 50 +- js/src/jit/x86-shared/Assembler-x86-shared.h | 20 +- .../AtomicOperations-x86-shared-gcc.h | 235 ++++ .../AtomicOperations-x86-shared-msvc.h | 367 ++++++ js/src/vm/Initialization.cpp | 5 - 19 files changed, 1144 insertions(+), 2114 deletions(-) delete mode 100644 js/src/jit-test/tests/atomics/memcpy-fidelity.js create mode 100644 js/src/jit/arm/AtomicOperations-arm.h create mode 100644 js/src/jit/arm64/AtomicOperations-arm64-gcc.h rename js/src/jit/{shared/AtomicOperations-feeling-lucky-msvc.h => arm64/AtomicOperations-arm64-msvc.h} (89%) rename js/src/jit/{shared/AtomicOperations-feeling-lucky-gcc.h => none/AtomicOperations-feeling-lucky.h} (84%) delete mode 100644 js/src/jit/shared/AtomicOperations-feeling-lucky.h delete mode 100644 
js/src/jit/shared/AtomicOperations-shared-jit.cpp delete mode 100644 js/src/jit/shared/AtomicOperations-shared-jit.h create mode 100644 js/src/jit/x86-shared/AtomicOperations-x86-shared-gcc.h create mode 100644 js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h diff --git a/js/src/jit-test/tests/atomics/memcpy-fidelity.js b/js/src/jit-test/tests/atomics/memcpy-fidelity.js deleted file mode 100644 index 81eb63fba28a3..0000000000000 --- a/js/src/jit-test/tests/atomics/memcpy-fidelity.js +++ /dev/null @@ -1,181 +0,0 @@ -// In order not to run afoul of C++ UB we have our own non-C++ definitions of -// operations (they are actually jitted) that can operate racily on shared -// memory, see jit/shared/AtomicOperations-shared-jit.cpp. -// -// Operations on fixed-width 1, 2, 4, and 8 byte data are adequately tested -// elsewhere. Here we specifically test our safe-when-racy replacements of -// memcpy and memmove. -// -// There are two primitives in the engine, memcpy_down and memcpy_up. These are -// equivalent except when data overlap, in which case memcpy_down handles -// overlapping copies that move from higher to lower addresses and memcpy_up -// handles ditto from lower to higher. memcpy uses memcpy_down always while -// memmove selects the one to use dynamically based on its arguments. - -// Basic memcpy algorithm to be tested: -// -// - if src and target have the same alignment -// - byte copy up to word alignment -// - block copy as much as possible -// - word copy as much as possible -// - byte copy any tail -// - else if on a platform that can deal with unaligned access -// (ie, x86, ARM64, and ARM if the proper flag is set) -// - block copy as much as possible -// - word copy as much as possible -// - byte copy any tail -// - else // on a platform that can't deal with unaligned access -// (ie ARM without the flag or x86 DEBUG builds with the -// JS_NO_UNALIGNED_MEMCPY env var) -// - block copy with byte copies -// - word copy with byte copies -// - byte copy any tail - -var target_buf = new SharedArrayBuffer(1024); -var src_buf = new SharedArrayBuffer(1024); - -/////////////////////////////////////////////////////////////////////////// -// -// Different src and target buffer, this is memcpy "move down". The same -// code is used in the engine for overlapping buffers when target addresses -// are lower than source addresses. - -fill(src_buf); - -// Basic 1K perfectly aligned copy, copies blocks only. -{ - let target = new Uint8Array(target_buf); - let src = new Uint8Array(src_buf); - clear(target_buf); - target.set(src); - check(target_buf, 0, 1024, 0); -} - -// Buffers are equally aligned but not on a word boundary and not ending on a -// word boundary either, so this will copy first some bytes, then some blocks, -// then some words, and then some bytes. -{ - let fill = 0x79; - clear(target_buf, fill); - let target = new Uint8Array(target_buf, 1, 1022); - let src = new Uint8Array(src_buf, 1, 1022); - target.set(src); - check_fill(target_buf, 0, 1, fill); - check(target_buf, 1, 1023, 1); - check_fill(target_buf, 1023, 1024, fill); -} - -// Buffers are unequally aligned, we'll copy bytes only on some platforms and -// unaligned blocks/words on others. 
-{ - clear(target_buf); - let target = new Uint8Array(target_buf, 0, 1023); - let src = new Uint8Array(src_buf, 1); - target.set(src); - check(target_buf, 0, 1023, 1); - check_zero(target_buf, 1023, 1024); -} - -/////////////////////////////////////////////////////////////////////////// -// -// Overlapping src and target buffer and the target addresses are always -// higher than the source addresses, this is memcpy "move up" - -// Buffers are equally aligned but not on a word boundary and not ending on a -// word boundary either, so this will copy first some bytes, then some blocks, -// then some words, and then some bytes. -{ - fill(target_buf); - let target = new Uint8Array(target_buf, 9, 999); - let src = new Uint8Array(target_buf, 1, 999); - target.set(src); - check(target_buf, 9, 1008, 1); - check(target_buf, 1008, 1024, 1008 & 255); -} - -// Buffers are unequally aligned, we'll copy bytes only on some platforms and -// unaligned blocks/words on others. -{ - fill(target_buf); - let target = new Uint8Array(target_buf, 2, 1022); - let src = new Uint8Array(target_buf, 1, 1022); - target.set(src); - check(target_buf, 2, 1024, 1); -} - -/////////////////////////////////////////////////////////////////////////// -// -// Copy 0 to 127 bytes from and to a variety of addresses to check that we -// handle limits properly in these edge cases. - -// Too slow in debug-noopt builds but we don't want to flag the test as slow, -// since that means it'll never be run. - -if (this.getBuildConfiguration && !getBuildConfiguration().debug) -{ - let t = new Uint8Array(target_buf); - for (let my_src_buf of [src_buf, target_buf]) { - for (let size=0; size < 127; size++) { - for (let src_offs=0; src_offs < 8; src_offs++) { - for (let target_offs=0; target_offs < 8; target_offs++) { - clear(target_buf, Math.random()*255); - let target = new Uint8Array(target_buf, target_offs, size); - - // Zero is boring - let bias = (Math.random() * 100 % 12) | 0; - - // Note src may overlap target partially - let src = new Uint8Array(my_src_buf, src_offs, size); - for ( let i=0; i < size; i++ ) - src[i] = i+bias; - - // We expect these values to be unchanged by the copy - let below = target_offs > 0 ? t[target_offs - 1] : 0; - let above = t[target_offs + size]; - - // Copy - target.set(src); - - // Verify - check(target_buf, target_offs, target_offs + size, bias); - if (target_offs > 0) - assertEq(t[target_offs-1], below); - assertEq(t[target_offs+size], above); - } - } - } - } -} - - -// Utilities - -function clear(buf, fill) { - let a = new Uint8Array(buf); - for ( let i=0; i < a.length; i++ ) - a[i] = fill; -} - -function fill(buf) { - let a = new Uint8Array(buf); - for ( let i=0; i < a.length; i++ ) - a[i] = i & 255 -} - -function check(buf, from, to, startingWith) { - let a = new Uint8Array(buf); - for ( let i=from; i < to; i++ ) { - assertEq(a[i], startingWith); - startingWith = (startingWith + 1) & 255; - } -} - -function check_zero(buf, from, to) { - check_fill(buf, from, to, 0); -} - -function check_fill(buf, from, to, fill) { - let a = new Uint8Array(buf); - for ( let i=from; i < to; i++ ) - assertEq(a[i], fill); -} diff --git a/js/src/jit/AtomicOperations.h b/js/src/jit/AtomicOperations.h index ad88c238cc121..420ec8d9ccdc1 100644 --- a/js/src/jit/AtomicOperations.h +++ b/js/src/jit/AtomicOperations.h @@ -147,13 +147,6 @@ class AtomicOperations { size_t nbytes); public: - // On some platforms we generate code for the atomics at run-time; that - // happens here. 
- static bool Initialize(); - - // Deallocate the code segment for generated atomics functions. - static void ShutDown(); - // Test lock-freedom for any int32 value. This implements the // Atomics::isLockFree() operation in the ECMAScript Shared Memory and // Atomics specification, as follows: @@ -279,6 +272,24 @@ class AtomicOperations { size_t nelem) { memmoveSafeWhenRacy(dest, src, nelem * sizeof(T)); } + +#ifdef DEBUG + // Constraints that must hold for atomic operations on all tier-1 platforms: + // + // - atomic cells can be 1, 2, 4, or 8 bytes + // - all atomic operations are lock-free, including 8-byte operations + // - atomic operations can only be performed on naturally aligned cells + // + // (Tier-2 and tier-3 platforms need not support 8-byte atomics, and if they + // do, they need not be lock-free.) + + template + static bool tier1Constraints(const T* addr) { + static_assert(sizeof(T) <= 8, "atomics supported up to 8 bytes only"); + return (sizeof(T) < 8 || (hasAtomic8() && isLockfree8())) && + !(uintptr_t(addr) & (sizeof(T) - 1)); + } +#endif }; inline bool AtomicOperations::isLockfreeJS(int32_t size) { @@ -322,7 +333,7 @@ inline bool AtomicOperations::isLockfreeJS(int32_t size) { // - write your own support code for the platform+compiler and create a new // case below // -// - include jit/shared/AtomicOperations-feeling-lucky.h in a case for the +// - include jit/none/AtomicOperations-feeling-lucky.h in a case for the // platform below, if you have a gcc-compatible compiler and truly feel // lucky. You may have to add a little code to that file, too. // @@ -340,38 +351,52 @@ inline bool AtomicOperations::isLockfreeJS(int32_t size) { # if defined(__clang__) || defined(__GNUC__) # include "jit/mips-shared/AtomicOperations-mips-shared.h" # else -# error "AtomicOperations on MIPS-32 for unknown compiler" +# error "No AtomicOperations support for this platform+compiler combination" # endif #elif defined(__x86_64__) || defined(_M_X64) || defined(__i386__) || \ defined(_M_IX86) -# if defined(JS_CODEGEN_X86) || defined(JS_CODEGEN_X64) -# include "jit/shared/AtomicOperations-shared-jit.h" +# if defined(__clang__) || defined(__GNUC__) +# include "jit/x86-shared/AtomicOperations-x86-shared-gcc.h" +# elif defined(_MSC_VER) +# include "jit/x86-shared/AtomicOperations-x86-shared-msvc.h" # else -# include "jit/shared/AtomicOperations-feeling-lucky.h" +# error "No AtomicOperations support for this platform+compiler combination" # endif #elif defined(__arm__) -# if defined(JS_CODEGEN_ARM) -# include "jit/shared/AtomicOperations-shared-jit.h" +# if defined(__clang__) || defined(__GNUC__) +# include "jit/arm/AtomicOperations-arm.h" # else -# include "jit/shared/AtomicOperations-feeling-lucky.h" +# error "No AtomicOperations support for this platform+compiler combination" # endif #elif defined(__aarch64__) || defined(_M_ARM64) -# if defined(JS_CODEGEN_ARM64) -# include "jit/shared/AtomicOperations-shared-jit.h" +# if defined(__clang__) || defined(__GNUC__) +# include "jit/arm64/AtomicOperations-arm64-gcc.h" +# elif defined(_MSC_VER) +# include "jit/arm64/AtomicOperations-arm64-msvc.h" # else -# include "jit/shared/AtomicOperations-feeling-lucky.h" +# error "No AtomicOperations support for this platform+compiler combination" # endif #elif defined(__mips__) # if defined(__clang__) || defined(__GNUC__) # include "jit/mips-shared/AtomicOperations-mips-shared.h" # else -# error "AtomicOperations on MIPS for an unknown compiler" +# error "No AtomicOperations support for this platform+compiler 
combination" # endif -#elif defined(__ppc__) || defined(__PPC__) || defined(__sparc__) || \ - defined(__ppc64__) || defined(__PPC64__) || defined(__ppc64le__) || \ - defined(__PPC64LE__) || defined(__alpha__) || defined(__hppa__) || \ - defined(__sh__) || defined(__s390__) || defined(__s390x__) -# include "jit/shared/AtomicOperations-feeling-lucky.h" +#elif defined(__ppc__) || defined(__PPC__) +# include "jit/none/AtomicOperations-feeling-lucky.h" +#elif defined(__sparc__) +# include "jit/none/AtomicOperations-feeling-lucky.h" +#elif defined(__ppc64__) || defined(__PPC64__) || defined(__ppc64le__) || \ + defined(__PPC64LE__) +# include "jit/none/AtomicOperations-feeling-lucky.h" +#elif defined(__alpha__) +# include "jit/none/AtomicOperations-feeling-lucky.h" +#elif defined(__hppa__) +# include "jit/none/AtomicOperations-feeling-lucky.h" +#elif defined(__sh__) +# include "jit/none/AtomicOperations-feeling-lucky.h" +#elif defined(__s390__) || defined(__s390x__) +# include "jit/none/AtomicOperations-feeling-lucky.h" #else # error "No AtomicOperations support provided for this platform" #endif diff --git a/js/src/jit/MacroAssembler.h b/js/src/jit/MacroAssembler.h index aeffeb763619c..edbb567c9a94d 100644 --- a/js/src/jit/MacroAssembler.h +++ b/js/src/jit/MacroAssembler.h @@ -977,13 +977,13 @@ class MacroAssembler : public MacroAssemblerSpecific { // =============================================================== // Shift functions - // For shift-by-register there may be platform-specific variations, for - // example, x86 will perform the shift mod 32 but ARM will perform the shift - // mod 256. + // For shift-by-register there may be platform-specific + // variations, for example, x86 will perform the shift mod 32 but + // ARM will perform the shift mod 256. // - // For shift-by-immediate the platform assembler may restrict the immediate, - // for example, the ARM assembler requires the count for 32-bit shifts to be - // in the range [0,31]. + // For shift-by-immediate the platform assembler may restrict the + // immediate, for example, the ARM assembler requires the count + // for 32-bit shifts to be in the range [0,31]. inline void lshift32(Imm32 shift, Register srcDest) PER_SHARED_ARCH; inline void rshift32(Imm32 shift, Register srcDest) PER_SHARED_ARCH; @@ -1947,14 +1947,6 @@ class MacroAssembler : public MacroAssemblerSpecific { Register offsetTemp, Register maskTemp, Register output) DEFINED_ON(mips_shared); - // x64: `output` must be rax. - // ARM: Registers must be distinct; `replacement` and `output` must be - // (even,odd) pairs. - - void compareExchange64(const Synchronization& sync, const Address& mem, - Register64 expected, Register64 replacement, - Register64 output) DEFINED_ON(arm, arm64, x64); - // Exchange with memory. Return the value initially in memory. // MIPS: `valueTemp`, `offsetTemp` and `maskTemp` must be defined for 8-bit // and 16-bit wide operations. @@ -1977,10 +1969,6 @@ class MacroAssembler : public MacroAssemblerSpecific { Register offsetTemp, Register maskTemp, Register output) DEFINED_ON(mips_shared); - void atomicExchange64(const Synchronization& sync, const Address& mem, - Register64 value, Register64 output) - DEFINED_ON(arm64, x64); - // Read-modify-write with memory. Return the value in memory before the // operation. // @@ -2022,15 +2010,6 @@ class MacroAssembler : public MacroAssemblerSpecific { Register valueTemp, Register offsetTemp, Register maskTemp, Register output) DEFINED_ON(mips_shared); - // x64: - // For Add and Sub, `temp` must be invalid. 
- // For And, Or, and Xor, `output` must be eax and `temp` must have a byte - // subregister. - - void atomicFetchOp64(const Synchronization& sync, AtomicOp op, - Register64 value, const Address& mem, Register64 temp, - Register64 output) DEFINED_ON(arm64, x64); - // ======================================================================== // Wasm atomic operations. // @@ -2154,13 +2133,11 @@ class MacroAssembler : public MacroAssemblerSpecific { const BaseIndex& mem, Register64 temp, Register64 output) DEFINED_ON(arm, mips32, x86); - // x86: `expected` must be the same as `output`, and must be edx:eax. - // x86: `replacement` must be ecx:ebx. + // x86: `expected` must be the same as `output`, and must be edx:eax + // x86: `replacement` must be ecx:ebx // x64: `output` must be rax. // ARM: Registers must be distinct; `replacement` and `output` must be - // (even,odd) pairs. - // ARM64: The base register in `mem` must not overlap `output`. - // MIPS: Registers must be distinct. + // (even,odd) pairs. MIPS: Registers must be distinct. void wasmCompareExchange64(const wasm::MemoryAccessDesc& access, const Address& mem, Register64 expected, @@ -2174,8 +2151,7 @@ class MacroAssembler : public MacroAssemblerSpecific { // x86: `value` must be ecx:ebx; `output` must be edx:eax. // ARM: Registers must be distinct; `value` and `output` must be (even,odd) - // pairs. - // MIPS: Registers must be distinct. + // pairs. MIPS: Registers must be distinct. void wasmAtomicExchange64(const wasm::MemoryAccessDesc& access, const Address& mem, Register64 value, @@ -2188,9 +2164,7 @@ class MacroAssembler : public MacroAssemblerSpecific { // x86: `output` must be edx:eax, `temp` must be ecx:ebx. // x64: For And, Or, and Xor `output` must be rax. // ARM: Registers must be distinct; `temp` and `output` must be (even,odd) - // pairs. - // MIPS: Registers must be distinct. - // MIPS32: `temp` should be invalid. + // pairs. MIPS: Registers must be distinct. MIPS32: `temp` should be invalid. void wasmAtomicFetchOp64(const wasm::MemoryAccessDesc& access, AtomicOp op, Register64 value, const Address& mem, diff --git a/js/src/jit/arm/AtomicOperations-arm.h b/js/src/jit/arm/AtomicOperations-arm.h new file mode 100644 index 0000000000000..b65709b4e5667 --- /dev/null +++ b/js/src/jit/arm/AtomicOperations-arm.h @@ -0,0 +1,221 @@ +/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- + * vim: set ts=8 sts=2 et sw=2 tw=80: + * This Source Code Form is subject to the terms of the Mozilla Public + * License, v. 2.0. If a copy of the MPL was not distributed with this + * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ + +#ifndef jit_arm_AtomicOperations_arm_h +#define jit_arm_AtomicOperations_arm_h + +#include "jit/arm/Architecture-arm.h" + +#include "vm/ArrayBufferObject.h" + +// For documentation, see jit/AtomicOperations.h + +// NOTE, this file is *not* used with the ARM simulator, only when compiling for +// actual ARM hardware. The simulators get the files that are appropriate for +// the hardware the simulator is running on. See the comments before the +// #include nest at the bottom of jit/AtomicOperations.h for more information. + +// Firefox requires gcc > 4.8, so we will always have the __atomic intrinsics +// added for use in C++11 . +// +// Note that using these intrinsics for most operations is not correct: the code +// has undefined behavior. The gcc documentation states that the compiler +// assumes the code is race free. 
This supposedly means C++ will allow some +// instruction reorderings (effectively those allowed by TSO) even for seq_cst +// ordered operations, but these reorderings are not allowed by JS. To do +// better we will end up with inline assembler or JIT-generated code. + +#if !defined(__clang__) && !defined(__GNUC__) +# error "This file only for gcc-compatible compilers" +#endif + +inline bool js::jit::AtomicOperations::hasAtomic8() { + // This guard is really only for tier-2 and tier-3 systems: LDREXD and + // STREXD have been available since ARMv6K, and only ARMv7 and later are + // tier-1. + return HasLDSTREXBHD(); +} + +inline bool js::jit::AtomicOperations::isLockfree8() { + // The JIT and the C++ compiler must agree on whether to use atomics + // for 64-bit accesses. There are two ways to do this: either the + // JIT defers to the C++ compiler (so if the C++ code is compiled + // for ARMv6, say, and __atomic_always_lock_free(8) is false, then the + // JIT ignores the fact that the program is running on ARMv7 or newer); + // or the C++ code in this file calls out to run-time generated code + // to do whatever the JIT does. + // + // For now, make the JIT defer to the C++ compiler when we know what + // the C++ compiler will do, otherwise assume a lock is needed. + MOZ_ASSERT(__atomic_always_lock_free(sizeof(int8_t), 0)); + MOZ_ASSERT(__atomic_always_lock_free(sizeof(int16_t), 0)); + MOZ_ASSERT(__atomic_always_lock_free(sizeof(int32_t), 0)); + + return hasAtomic8() && __atomic_always_lock_free(sizeof(int64_t), 0); +} + +inline void js::jit::AtomicOperations::fenceSeqCst() { + __atomic_thread_fence(__ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::loadSeqCst(T* addr) { + MOZ_ASSERT(tier1Constraints(addr)); + T v; + __atomic_load(addr, &v, __ATOMIC_SEQ_CST); + return v; +} + +template +inline void js::jit::AtomicOperations::storeSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + __atomic_store(addr, &val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::exchangeSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + T v; + __atomic_exchange(addr, &val, &v, __ATOMIC_SEQ_CST); + return v; +} + +template +inline T js::jit::AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, + T newval) { + MOZ_ASSERT(tier1Constraints(addr)); + __atomic_compare_exchange(addr, &oldval, &newval, false, __ATOMIC_SEQ_CST, + __ATOMIC_SEQ_CST); + return oldval; +} + +template +inline T js::jit::AtomicOperations::fetchAddSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_add(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::fetchSubSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_sub(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::fetchAndSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_and(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::fetchOrSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_or(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::fetchXorSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_xor(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { + MOZ_ASSERT(tier1Constraints(addr)); + T v; + __atomic_load(addr, &v, __ATOMIC_RELAXED); + return v; +} + +namespace js 
{ +namespace jit { + +#define GCC_RACYLOADOP(T) \ + template <> \ + inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { \ + return *addr; \ + } + +// On 32-bit platforms, loadSafeWhenRacy need not be access-atomic for 64-bit +// data, so just use regular accesses instead of the expensive __atomic_load +// solution which must use LDREXD/CLREX. +#ifndef JS_64BIT +GCC_RACYLOADOP(int64_t) +GCC_RACYLOADOP(uint64_t) +#endif + +// Float and double accesses are not access-atomic. +GCC_RACYLOADOP(float) +GCC_RACYLOADOP(double) + +// Clang requires a specialization for uint8_clamped. +template <> +inline uint8_clamped js::jit::AtomicOperations::loadSafeWhenRacy( + uint8_clamped* addr) { + uint8_t v; + __atomic_load(&addr->val, &v, __ATOMIC_RELAXED); + return uint8_clamped(v); +} + +#undef GCC_RACYLOADOP + +} // namespace jit +} // namespace js + +template +inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + __atomic_store(addr, &val, __ATOMIC_RELAXED); +} + +namespace js { +namespace jit { + +#define GCC_RACYSTOREOP(T) \ + template <> \ + inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { \ + *addr = val; \ + } + +// On 32-bit platforms, storeSafeWhenRacy need not be access-atomic for 64-bit +// data, so just use regular accesses instead of the expensive __atomic_store +// solution which must use LDREXD/STREXD. +#ifndef JS_64BIT +GCC_RACYSTOREOP(int64_t) +GCC_RACYSTOREOP(uint64_t) +#endif + +// Float and double accesses are not access-atomic. +GCC_RACYSTOREOP(float) +GCC_RACYSTOREOP(double) + +// Clang requires a specialization for uint8_clamped. +template <> +inline void js::jit::AtomicOperations::storeSafeWhenRacy(uint8_clamped* addr, + uint8_clamped val) { + __atomic_store(&addr->val, &val.val, __ATOMIC_RELAXED); +} + +#undef GCC_RACYSTOREOP + +} // namespace jit +} // namespace js + +inline void js::jit::AtomicOperations::memcpySafeWhenRacy(void* dest, + const void* src, + size_t nbytes) { + MOZ_ASSERT(!((char*)dest <= (char*)src && (char*)src < (char*)dest + nbytes)); + MOZ_ASSERT(!((char*)src <= (char*)dest && (char*)dest < (char*)src + nbytes)); + memcpy(dest, src, nbytes); +} + +inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, + const void* src, + size_t nbytes) { + memmove(dest, src, nbytes); +} + +#endif // jit_arm_AtomicOperations_arm_h diff --git a/js/src/jit/arm/MacroAssembler-arm.cpp b/js/src/jit/arm/MacroAssembler-arm.cpp index 17ec4e3f42122..ddc562cf26027 100644 --- a/js/src/jit/arm/MacroAssembler-arm.cpp +++ b/js/src/jit/arm/MacroAssembler-arm.cpp @@ -5290,11 +5290,10 @@ void MacroAssembler::wasmAtomicLoad64(const wasm::MemoryAccessDesc& access, } template -static void CompareExchange64(MacroAssembler& masm, - const wasm::MemoryAccessDesc* access, - const Synchronization& sync, const T& mem, - Register64 expect, Register64 replace, - Register64 output) { +static void WasmCompareExchange64(MacroAssembler& masm, + const wasm::MemoryAccessDesc& access, + const T& mem, Register64 expect, + Register64 replace, Register64 output) { MOZ_ASSERT(expect != replace && replace != output && output != expect); MOZ_ASSERT((replace.low.code() & 1) == 0); @@ -5309,13 +5308,11 @@ static void CompareExchange64(MacroAssembler& masm, SecondScratchRegisterScope scratch2(masm); Register ptr = ComputePointerForAtomic(masm, mem, scratch2); - masm.memoryBarrierBefore(sync); + masm.memoryBarrierBefore(access.sync()); masm.bind(&again); BufferOffset load = masm.as_ldrexd(output.low, 
output.high, ptr); - if (access) { - masm.append(*access, load.getOffset()); - } + masm.append(access, load.getOffset()); masm.as_cmp(output.low, O2Reg(expect.low)); masm.as_cmp(output.high, O2Reg(expect.high), MacroAssembler::Equal); @@ -5329,7 +5326,7 @@ static void CompareExchange64(MacroAssembler& masm, masm.as_b(&again, MacroAssembler::Equal); masm.bind(&done); - masm.memoryBarrierAfter(sync); + masm.memoryBarrierAfter(access.sync()); } void MacroAssembler::wasmCompareExchange64(const wasm::MemoryAccessDesc& access, @@ -5337,8 +5334,7 @@ void MacroAssembler::wasmCompareExchange64(const wasm::MemoryAccessDesc& access, Register64 expect, Register64 replace, Register64 output) { - CompareExchange64(*this, &access, access.sync(), mem, expect, replace, - output); + WasmCompareExchange64(*this, access, mem, expect, replace, output); } void MacroAssembler::wasmCompareExchange64(const wasm::MemoryAccessDesc& access, @@ -5346,14 +5342,7 @@ void MacroAssembler::wasmCompareExchange64(const wasm::MemoryAccessDesc& access, Register64 expect, Register64 replace, Register64 output) { - CompareExchange64(*this, &access, access.sync(), mem, expect, replace, - output); -} - -void MacroAssembler::compareExchange64(const Synchronization& sync, - const Address& mem, Register64 expect, - Register64 replace, Register64 output) { - CompareExchange64(*this, nullptr, sync, mem, expect, replace, output); + WasmCompareExchange64(*this, access, mem, expect, replace, output); } template diff --git a/js/src/jit/arm64/AtomicOperations-arm64-gcc.h b/js/src/jit/arm64/AtomicOperations-arm64-gcc.h new file mode 100644 index 0000000000000..07b7901c3c3c8 --- /dev/null +++ b/js/src/jit/arm64/AtomicOperations-arm64-gcc.h @@ -0,0 +1,152 @@ +/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- + * vim: set ts=8 sts=2 et sw=2 tw=80: + * This Source Code Form is subject to the terms of the Mozilla Public + * License, v. 2.0. If a copy of the MPL was not distributed with this + * file, You can obtain one at http://mozilla.org/MPL/2.0/. 
*/ + +/* For documentation, see jit/AtomicOperations.h */ + +#ifndef jit_arm64_AtomicOperations_arm64_h +#define jit_arm64_AtomicOperations_arm64_h + +#include "mozilla/Assertions.h" +#include "mozilla/Types.h" + +#include "vm/ArrayBufferObject.h" + +#if !defined(__clang__) && !defined(__GNUC__) +# error "This file only for gcc-compatible compilers" +#endif + +inline bool js::jit::AtomicOperations::hasAtomic8() { return true; } + +inline bool js::jit::AtomicOperations::isLockfree8() { + MOZ_ASSERT(__atomic_always_lock_free(sizeof(int8_t), 0)); + MOZ_ASSERT(__atomic_always_lock_free(sizeof(int16_t), 0)); + MOZ_ASSERT(__atomic_always_lock_free(sizeof(int32_t), 0)); + MOZ_ASSERT(__atomic_always_lock_free(sizeof(int64_t), 0)); + return true; +} + +inline void js::jit::AtomicOperations::fenceSeqCst() { + __atomic_thread_fence(__ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::loadSeqCst(T* addr) { + MOZ_ASSERT(tier1Constraints(addr)); + T v; + __atomic_load(addr, &v, __ATOMIC_SEQ_CST); + return v; +} + +template +inline void js::jit::AtomicOperations::storeSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + __atomic_store(addr, &val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::exchangeSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + T v; + __atomic_exchange(addr, &val, &v, __ATOMIC_SEQ_CST); + return v; +} + +template +inline T js::jit::AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, + T newval) { + MOZ_ASSERT(tier1Constraints(addr)); + __atomic_compare_exchange(addr, &oldval, &newval, false, __ATOMIC_SEQ_CST, + __ATOMIC_SEQ_CST); + return oldval; +} + +template +inline T js::jit::AtomicOperations::fetchAddSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_add(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::fetchSubSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_sub(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::fetchAndSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_and(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::fetchOrSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_or(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::fetchXorSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_xor(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { + MOZ_ASSERT(tier1Constraints(addr)); + T v; + __atomic_load(addr, &v, __ATOMIC_RELAXED); + return v; +} + +namespace js { +namespace jit { + +// Clang requires a specialization for uint8_clamped. +template <> +inline js::uint8_clamped js::jit::AtomicOperations::loadSafeWhenRacy( + js::uint8_clamped* addr) { + uint8_t v; + __atomic_load(&addr->val, &v, __ATOMIC_RELAXED); + return js::uint8_clamped(v); +} + +} // namespace jit +} // namespace js + +template +inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + __atomic_store(addr, &val, __ATOMIC_RELAXED); +} + +namespace js { +namespace jit { + +// Clang requires a specialization for uint8_clamped. 
+template <> +inline void js::jit::AtomicOperations::storeSafeWhenRacy( + js::uint8_clamped* addr, js::uint8_clamped val) { + __atomic_store(&addr->val, &val.val, __ATOMIC_RELAXED); +} + +} // namespace jit +} // namespace js + +inline void js::jit::AtomicOperations::memcpySafeWhenRacy(void* dest, + const void* src, + size_t nbytes) { + MOZ_ASSERT(!((char*)dest <= (char*)src && (char*)src < (char*)dest + nbytes)); + MOZ_ASSERT(!((char*)src <= (char*)dest && (char*)dest < (char*)src + nbytes)); + memcpy(dest, src, nbytes); +} + +inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, + const void* src, + size_t nbytes) { + memmove(dest, src, nbytes); +} + +#endif // jit_arm64_AtomicOperations_arm64_h diff --git a/js/src/jit/shared/AtomicOperations-feeling-lucky-msvc.h b/js/src/jit/arm64/AtomicOperations-arm64-msvc.h similarity index 89% rename from js/src/jit/shared/AtomicOperations-feeling-lucky-msvc.h rename to js/src/jit/arm64/AtomicOperations-arm64-msvc.h index 7ca961bbdf5ec..4a70d9867cf9d 100644 --- a/js/src/jit/shared/AtomicOperations-feeling-lucky-msvc.h +++ b/js/src/jit/arm64/AtomicOperations-arm64-msvc.h @@ -4,26 +4,22 @@ * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ -#ifndef jit_shared_AtomicOperations_feeling_lucky_msvc_h -#define jit_shared_AtomicOperations_feeling_lucky_msvc_h +#ifndef jit_shared_AtomicOperations_x86_shared_msvc_h +#define jit_shared_AtomicOperations_x86_shared_msvc_h #include "mozilla/Assertions.h" #include "mozilla/Types.h" -// Explicitly exclude tier-1 platforms. - -#if ((defined(__x86_64__) || defined(_M_X64)) && defined(JS_CODEGEN_X64)) || \ - ((defined(__i386__) || defined(_M_IX86)) && defined(JS_CODEGEN_X86)) || \ - (defined(__arm__) && defined(JS_CODEGEN_ARM)) || \ - ((defined(__aarch64__) || defined(_M_ARM64)) && defined(JS_CODEGEN_ARM64)) -# error "Do not use this code on a tier-1 platform when a JIT is available" -#endif - #if !defined(_MSC_VER) # error "This file only for Microsoft Visual C++" #endif -// For overall documentation, see jit/AtomicOperations.h. +// For overall documentation, see jit/AtomicOperations.h/ +// +// For general comments on lock-freedom, access-atomicity, and related matters +// on x86 and x64, notably for justification of the implementations of the +// 64-bit primitives on 32-bit systems, see the comment block in +// AtomicOperations-x86-shared-gcc.h. // Below, _ReadWriteBarrier is a compiler directive, preventing reordering of // instructions and reuse of memory values across it in the compiler, but having @@ -34,20 +30,13 @@ // 32-bit operations, and 64-bit operations on 64-bit systems) and otherwise // falls back on CMPXCHG8B for 64-bit operations on 32-bit systems. We could be // using those functions in many cases here (though not all). I have not done -// so because I don't yet know how far back those functions are supported. +// so because (a) I don't yet know how far back those functions are supported +// and (b) I expect we'll end up dropping into assembler here eventually so as +// to guarantee that the C++ compiler won't optimize the code. // Note, _InterlockedCompareExchange takes the *new* value as the second // argument and the *comparand* (expected old value) as the third argument. 
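Spelled out as a free function, a 64-bit seq_cst store built on that intrinsic looks like the sketch below; the header itself expresses the same loop through the MSC_* macros that follow, this is only an illustration of the argument order.

  #include <intrin.h>
  #include <cstdint>

  inline void storeSeqCst64(int64_t* addr, int64_t val) {
    _ReadWriteBarrier();
    int64_t oldval = *addr;
    for (;;) {
      // New value is the second argument, expected old value (comparand)
      // is the third; retry until the CAS observes the value we read.
      int64_t nextval = (int64_t)_InterlockedCompareExchange64(
          (__int64 volatile*)addr, (__int64)val, (__int64)oldval);
      if (nextval == oldval) break;
      oldval = nextval;
    }
    _ReadWriteBarrier();
  }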
-inline bool js::jit::AtomicOperations::Initialize() { - // Nothing - return true; -} - -inline void js::jit::AtomicOperations::ShutDown() { - // Nothing -} - inline bool js::jit::AtomicOperations::hasAtomic8() { return true; } inline bool js::jit::AtomicOperations::isLockfree8() { @@ -64,19 +53,14 @@ inline bool js::jit::AtomicOperations::isLockfree8() { inline void js::jit::AtomicOperations::fenceSeqCst() { _ReadWriteBarrier(); -#if defined(_M_IX86) || defined(_M_X64) - _mm_mfence(); -#elif defined(_M_ARM64) // MemoryBarrier is defined in winnt.h, which we don't want to include here. // This expression is the expansion of MemoryBarrier. __dmb(_ARM64_BARRIER_SY); -#else -#error "Unknown hardware for MSVC" -#endif } template inline T js::jit::AtomicOperations::loadSeqCst(T* addr) { + MOZ_ASSERT(tier1Constraints(addr)); _ReadWriteBarrier(); T v = *addr; _ReadWriteBarrier(); @@ -90,6 +74,7 @@ namespace jit { # define MSC_LOADOP(T) \ template <> \ inline T AtomicOperations::loadSeqCst(T* addr) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ _ReadWriteBarrier(); \ return (T)_InterlockedCompareExchange64((__int64 volatile*)addr, 0, 0); \ } @@ -105,6 +90,7 @@ MSC_LOADOP(uint64_t) template inline void js::jit::AtomicOperations::storeSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); _ReadWriteBarrier(); *addr = val; fenceSeqCst(); @@ -117,6 +103,7 @@ namespace jit { # define MSC_STOREOP(T) \ template <> \ inline void AtomicOperations::storeSeqCst(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ _ReadWriteBarrier(); \ T oldval = *addr; \ for (;;) { \ @@ -140,6 +127,7 @@ MSC_STOREOP(uint64_t) #define MSC_EXCHANGEOP(T, U, xchgop) \ template <> \ inline T AtomicOperations::exchangeSeqCst(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ return (T)xchgop((U volatile*)addr, (U)val); \ } @@ -147,6 +135,7 @@ MSC_STOREOP(uint64_t) # define MSC_EXCHANGEOP_CAS(T) \ template <> \ inline T AtomicOperations::exchangeSeqCst(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ _ReadWriteBarrier(); \ T oldval = *addr; \ for (;;) { \ @@ -188,6 +177,7 @@ MSC_EXCHANGEOP(uint64_t, __int64, _InterlockedExchange64) template <> \ inline T AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, \ T newval) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ return (T)cmpxchg((U volatile*)addr, (U)newval, (U)oldval); \ } @@ -211,6 +201,7 @@ MSC_CAS(uint64_t, __int64, _InterlockedCompareExchange64) #define MSC_FETCHADDOP(T, U, xadd) \ template <> \ inline T AtomicOperations::fetchAddSeqCst(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ return (T)xadd((U volatile*)addr, (U)val); \ } @@ -224,6 +215,7 @@ MSC_CAS(uint64_t, __int64, _InterlockedCompareExchange64) # define MSC_FETCHADDOP_CAS(T) \ template <> \ inline T AtomicOperations::fetchAddSeqCst(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ _ReadWriteBarrier(); \ T oldval = *addr; \ for (;;) { \ @@ -275,6 +267,7 @@ MSC_FETCHSUBOP(uint64_t) #define MSC_FETCHBITOPX(T, U, name, op) \ template <> \ inline T AtomicOperations::name(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ return (T)op((U volatile*)addr, (U)val); \ } @@ -290,6 +283,7 @@ MSC_FETCHSUBOP(uint64_t) # define MSC_FETCHBITOPX_CAS(T, name, OP) \ template <> \ inline T AtomicOperations::name(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ _ReadWriteBarrier(); \ T oldval = *addr; \ for (;;) { \ @@ -344,6 +338,7 @@ MSC_FETCHBITOP(uint64_t, __int64, _InterlockedAnd64, _InterlockedOr64, template inline T 
js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { + MOZ_ASSERT(tier1Constraints(addr)); // This is also appropriate for double, int64, and uint64 on 32-bit // platforms since there are no guarantees of access-atomicity. return *addr; @@ -351,6 +346,7 @@ inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { template inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); // This is also appropriate for double, int64, and uint64 on 32-bit // platforms since there are no guarantees of access-atomicity. *addr = val; @@ -370,4 +366,4 @@ inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, ::memmove(dest, src, nbytes); } -#endif // jit_shared_AtomicOperations_feeling_lucky_msvc_h +#endif // jit_shared_AtomicOperations_x86_shared_msvc_h diff --git a/js/src/jit/arm64/MacroAssembler-arm64.cpp b/js/src/jit/arm64/MacroAssembler-arm64.cpp index 295eb9a136662..330405583e81f 100644 --- a/js/src/jit/arm64/MacroAssembler-arm64.cpp +++ b/js/src/jit/arm64/MacroAssembler-arm64.cpp @@ -1604,8 +1604,6 @@ static void CompareExchange(MacroAssembler& masm, Register scratch2 = temps.AcquireX().asUnsized(); MemOperand ptr = ComputePointerForAtomic(masm, mem, scratch2); - MOZ_ASSERT(ptr.base().asUnsized() != output); - masm.memoryBarrierBefore(sync); Register scratch = temps.AcquireX().asUnsized(); @@ -1709,27 +1707,6 @@ void MacroAssembler::compareExchange(Scalar::Type type, output); } -void MacroAssembler::compareExchange64(const Synchronization& sync, - const Address& mem, Register64 expect, - Register64 replace, Register64 output) { - CompareExchange(*this, nullptr, Scalar::Int64, Width::_64, sync, mem, - expect.reg, replace.reg, output.reg); -} - -void MacroAssembler::atomicExchange64(const Synchronization& sync, - const Address& mem, Register64 value, - Register64 output) { - AtomicExchange(*this, nullptr, Scalar::Int64, Width::_64, sync, mem, - value.reg, output.reg); -} - -void MacroAssembler::atomicFetchOp64(const Synchronization& sync, AtomicOp op, - Register64 value, const Address& mem, - Register64 temp, Register64 output) { - AtomicFetchOp(*this, nullptr, Scalar::Int64, Width::_64, sync, op, mem, - value.reg, temp.reg, output.reg); -} - void MacroAssembler::wasmCompareExchange(const wasm::MemoryAccessDesc& access, const Address& mem, Register oldval, Register newval, Register output) { diff --git a/js/src/jit/mips-shared/AtomicOperations-mips-shared.h b/js/src/jit/mips-shared/AtomicOperations-mips-shared.h index 021f13164296d..7336532e69ae3 100644 --- a/js/src/jit/mips-shared/AtomicOperations-mips-shared.h +++ b/js/src/jit/mips-shared/AtomicOperations-mips-shared.h @@ -61,15 +61,6 @@ struct MOZ_RAII AddressGuard { } // namespace jit } // namespace js -inline bool js::jit::AtomicOperations::Initialize() { - // Nothing - return true; -} - -inline void js::jit::AtomicOperations::ShutDown() { - // Nothing -} - inline bool js::jit::AtomicOperations::hasAtomic8() { return true; } inline bool js::jit::AtomicOperations::isLockfree8() { diff --git a/js/src/jit/moz.build b/js/src/jit/moz.build index e05c5f543d03b..17b36f89ffbb3 100644 --- a/js/src/jit/moz.build +++ b/js/src/jit/moz.build @@ -112,7 +112,6 @@ if not CONFIG['ENABLE_ION']: elif CONFIG['JS_CODEGEN_X86'] or CONFIG['JS_CODEGEN_X64']: LOpcodesGenerated.inputs += ['x86-shared/LIR-x86-shared.h'] UNIFIED_SOURCES += [ - 'shared/AtomicOperations-shared-jit.cpp', 'x86-shared/Architecture-x86-shared.cpp', 'x86-shared/Assembler-x86-shared.cpp', 
'x86-shared/AssemblerBuffer-x86-shared.cpp', @@ -155,7 +154,6 @@ elif CONFIG['JS_CODEGEN_ARM']: 'arm/MacroAssembler-arm.cpp', 'arm/MoveEmitter-arm.cpp', 'arm/Trampoline-arm.cpp', - 'shared/AtomicOperations-shared-jit.cpp', ] if CONFIG['JS_SIMULATOR_ARM']: UNIFIED_SOURCES += [ @@ -187,8 +185,7 @@ elif CONFIG['JS_CODEGEN_ARM64']: 'arm64/vixl/MozAssembler-vixl.cpp', 'arm64/vixl/MozCpu-vixl.cpp', 'arm64/vixl/MozInstructions-vixl.cpp', - 'arm64/vixl/Utils-vixl.cpp', - 'shared/AtomicOperations-shared-jit.cpp', + 'arm64/vixl/Utils-vixl.cpp' ] if CONFIG['JS_SIMULATOR_ARM64']: UNIFIED_SOURCES += [ diff --git a/js/src/jit/shared/AtomicOperations-feeling-lucky-gcc.h b/js/src/jit/none/AtomicOperations-feeling-lucky.h similarity index 84% rename from js/src/jit/shared/AtomicOperations-feeling-lucky-gcc.h rename to js/src/jit/none/AtomicOperations-feeling-lucky.h index 3270785f2a022..21243a1acefeb 100644 --- a/js/src/jit/shared/AtomicOperations-feeling-lucky-gcc.h +++ b/js/src/jit/none/AtomicOperations-feeling-lucky.h @@ -7,10 +7,10 @@ /* For documentation, see jit/AtomicOperations.h, both the comment block at the * beginning and the #ifdef nest near the end. * - * This is a common file for tier-3 platforms (including simulators for our - * tier-1 platforms) that are not providing hardware-specific implementations of - * the atomic operations. Please keep it reasonably platform-independent by - * adding #ifdefs at the beginning as much as possible, not throughout the file. + * This is a common file for tier-3 platforms that are not providing + * hardware-specific implementations of the atomic operations. Please keep it + * reasonably platform-independent by adding #ifdefs at the beginning as much as + * possible, not throughout the file. * * * !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! @@ -22,25 +22,12 @@ * frequently good enough for tier-3 platforms. */ -#ifndef jit_shared_AtomicOperations_feeling_lucky_gcc_h -#define jit_shared_AtomicOperations_feeling_lucky_gcc_h +#ifndef jit_none_AtomicOperations_feeling_lucky_h +#define jit_none_AtomicOperations_feeling_lucky_h #include "mozilla/Assertions.h" #include "mozilla/Types.h" -// Explicitly exclude tier-1 platforms. - -#if ((defined(__x86_64__) || defined(_M_X64)) && defined(JS_CODEGEN_X64)) || \ - ((defined(__i386__) || defined(_M_IX86)) && defined(JS_CODEGEN_X86)) || \ - (defined(__arm__) && defined(JS_CODEGEN_ARM)) || \ - ((defined(__aarch64__) || defined(_M_ARM64)) && defined(JS_CODEGEN_ARM64)) -# error "Do not use this code on a tier-1 platform when a JIT is available" -#endif - -#if !(defined(__clang__) || defined(__GNUC__)) -# error "This file only for gcc/Clang" -#endif - // 64-bit atomics are not required by the JS spec, and you can compile // SpiderMonkey without them. // @@ -87,15 +74,21 @@ # define GNUC_COMPATIBLE #endif -// The default implementation tactic for gcc/clang is to use the newer __atomic -// intrinsics added for use in C++11 . Where that isn't available, we -// use GCC's older __sync functions instead. +#ifdef __s390x__ +# define HAS_64BIT_ATOMICS +# define HAS_64BIT_LOCKFREE +# define GNUC_COMPATIBLE +#endif + +// The default implementation tactic for gcc/clang is to use the newer +// __atomic intrinsics added for use in C++11 . Where that +// isn't available, we use GCC's older __sync functions instead. 
// -// ATOMICS_IMPLEMENTED_WITH_SYNC_INTRINSICS is kept as a backward compatible -// option for older compilers: enable this to use GCC's old __sync functions -// instead of the newer __atomic functions. This will be required for GCC 4.6.x -// and earlier, and probably for Clang 3.1, should we need to use those -// versions. Firefox no longer supports compilers that old. +// ATOMICS_IMPLEMENTED_WITH_SYNC_INTRINSICS is kept as a backward +// compatible option for older compilers: enable this to use GCC's old +// __sync functions instead of the newer __atomic functions. This +// will be required for GCC 4.6.x and earlier, and probably for Clang +// 3.1, should we need to use those versions. //#define ATOMICS_IMPLEMENTED_WITH_SYNC_INTRINSICS @@ -107,17 +100,7 @@ // Try to avoid platform #ifdefs below this point. -inline bool js::jit::AtomicOperations::Initialize() { - // Nothing - return true; -} - -inline void js::jit::AtomicOperations::ShutDown() { - // Nothing -} - -// When compiling with Clang on 32-bit linux it will be necessary to link with -// -latomic to get the proper 64-bit intrinsics. +#ifdef GNUC_COMPATIBLE inline bool js::jit::AtomicOperations::hasAtomic8() { # if defined(HAS_64BIT_ATOMICS) @@ -205,41 +188,6 @@ inline void AtomicOperations::storeSeqCst(uint64_t* addr, uint64_t val) { } // namespace js # endif -template -inline T js::jit::AtomicOperations::exchangeSeqCst(T* addr, T val) { - static_assert(sizeof(T) <= 8, "atomics supported up to 8 bytes only"); -# ifdef ATOMICS_IMPLEMENTED_WITH_SYNC_INTRINSICS - T v; - __sync_synchronize(); - do { - v = *addr; - } while (__sync_val_compare_and_swap(addr, v, val) != v); - return v; -# else - T v; - __atomic_exchange(addr, &val, &v, __ATOMIC_SEQ_CST); - return v; -# endif -} - -# ifndef HAS_64BIT_ATOMICS -namespace js { -namespace jit { - -template <> -inline int64_t AtomicOperations::exchangeSeqCst(int64_t* addr, int64_t val) { - MOZ_CRASH("No 64-bit atomics"); -} - -template <> -inline uint64_t AtomicOperations::exchangeSeqCst(uint64_t* addr, uint64_t val) { - MOZ_CRASH("No 64-bit atomics"); -} - -} // namespace jit -} // namespace js -# endif - template inline T js::jit::AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, T newval) { @@ -420,9 +368,6 @@ inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { static_assert(sizeof(T) <= 8, "atomics supported up to 8 bytes only"); // This is actually roughly right even on 32-bit platforms since in that // case, double, int64, and uint64 loads need not be access-atomic. - // - // We could use __atomic_load, but it would be needlessly expensive on - // 32-bit platforms that could support it and just plain wrong on others. return *addr; } @@ -431,9 +376,6 @@ inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { static_assert(sizeof(T) <= 8, "atomics supported up to 8 bytes only"); // This is actually roughly right even on 32-bit platforms since in that // case, double, int64, and uint64 loads need not be access-atomic. - // - // We could use __atomic_store, but it would be needlessly expensive on - // 32-bit platforms that could support it and just plain wrong on others. 
*addr = val; } @@ -451,8 +393,50 @@ inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, ::memmove(dest, src, nbytes); } +template +inline T js::jit::AtomicOperations::exchangeSeqCst(T* addr, T val) { + static_assert(sizeof(T) <= 8, "atomics supported up to 8 bytes only"); +# ifdef ATOMICS_IMPLEMENTED_WITH_SYNC_INTRINSICS + T v; + __sync_synchronize(); + do { + v = *addr; + } while (__sync_val_compare_and_swap(addr, v, val) != v); + return v; +# else + T v; + __atomic_exchange(addr, &val, &v, __ATOMIC_SEQ_CST); + return v; +# endif +} + +# ifndef HAS_64BIT_ATOMICS +namespace js { +namespace jit { + +template <> +inline int64_t AtomicOperations::exchangeSeqCst(int64_t* addr, int64_t val) { + MOZ_CRASH("No 64-bit atomics"); +} + +template <> +inline uint64_t AtomicOperations::exchangeSeqCst(uint64_t* addr, uint64_t val) { + MOZ_CRASH("No 64-bit atomics"); +} + +} // namespace jit +} // namespace js +# endif + +#else + +# error "Either use GCC or Clang, or add code here" + +#endif + #undef ATOMICS_IMPLEMENTED_WITH_SYNC_INTRINSICS +#undef GNUC_COMPATIBLE #undef HAS_64BIT_ATOMICS #undef HAS_64BIT_LOCKFREE -#endif // jit_shared_AtomicOperations_feeling_lucky_gcc_h +#endif // jit_none_AtomicOperations_feeling_lucky_h diff --git a/js/src/jit/shared/AtomicOperations-feeling-lucky.h b/js/src/jit/shared/AtomicOperations-feeling-lucky.h deleted file mode 100644 index a399f271ae752..0000000000000 --- a/js/src/jit/shared/AtomicOperations-feeling-lucky.h +++ /dev/null @@ -1,19 +0,0 @@ -/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*- - * vim: set ts=8 sts=4 et sw=4 tw=99: - * This Source Code Form is subject to the terms of the Mozilla Public - * License, v. 2.0. If a copy of the MPL was not distributed with this - * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ - -#ifndef jit_shared_AtomicOperations_feeling_lucky_h -#define jit_shared_AtomicOperations_feeling_lucky_h - -#if defined(__clang__) || defined(__GNUC__) -# include "jit/shared/AtomicOperations-feeling-lucky-gcc.h" -#elif defined(_MSC_VER) -# include "jit/shared/AtomicOperations-feeling-lucky-msvc.h" -#else -# error "No AtomicOperations support for this platform+compiler combination" -#endif - -#endif // jit_shared_AtomicOperations_feeling_lucky_h - diff --git a/js/src/jit/shared/AtomicOperations-shared-jit.cpp b/js/src/jit/shared/AtomicOperations-shared-jit.cpp deleted file mode 100644 index fd0a1a109339d..0000000000000 --- a/js/src/jit/shared/AtomicOperations-shared-jit.cpp +++ /dev/null @@ -1,1018 +0,0 @@ -/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- - * vim: set ts=8 sts=4 et sw=4 tw=99: - * This Source Code Form is subject to the terms of the Mozilla Public - * License, v. 2.0. If a copy of the MPL was not distributed with this - * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ - -#include "mozilla/Atomics.h" - -#ifdef JS_CODEGEN_ARM -# include "jit/arm/Architecture-arm.h" -#endif -#include "jit/AtomicOperations.h" -#include "jit/IonTypes.h" -#include "jit/MacroAssembler.h" -#include "jit/RegisterSets.h" - -#include "jit/MacroAssembler-inl.h" - -using namespace js; -using namespace js::jit; - -// Assigned registers must follow these rules: -// -// - if they overlap the argument registers (for arguments we use) then they -// -// M M U U SSSS TTTTT -// ====\ MM MM U U S T /==== -// =====> M M M U U SSS T <===== -// ====/ M M U U S T \==== -// M M UUU SSSS T -// -// require no register movement, even for 64-bit registers. 
(If this becomes -// too complex to handle then we need to create an abstraction that uses the -// MoveResolver, see comments on bug 1394420.) -// -// - they should be volatile when possible so that we don't have to save and -// restore them. -// -// Note that the functions we're generating have a very limited number of -// signatures, and the register assignments need only work for these signatures. -// The signatures are these: -// -// () -// (ptr) -// (ptr, val/val64) -// (ptr, ptr) -// (ptr, val/val64, val/val64) -// -// It would be nice to avoid saving and restoring all the nonvolatile registers -// for all the operations, and instead save and restore only the registers used -// by each specific operation, but the amount of protocol needed to accomplish -// that probably does not pay for itself. - -#if defined(JS_CODEGEN_X64) - -// Selected registers match the argument registers exactly, and none of them -// overlap the result register. - -static const LiveRegisterSet AtomicNonVolatileRegs; - -static constexpr Register AtomicPtrReg = IntArgReg0; -static constexpr Register AtomicPtr2Reg = IntArgReg1; -static constexpr Register AtomicValReg = IntArgReg1; -static constexpr Register64 AtomicValReg64(IntArgReg1); -static constexpr Register AtomicVal2Reg = IntArgReg2; -static constexpr Register64 AtomicVal2Reg64(IntArgReg2); -static constexpr Register AtomicTemp = IntArgReg3; -static constexpr Register64 AtomicTemp64(IntArgReg3); - -#elif defined(JS_CODEGEN_ARM64) - -// Selected registers match the argument registers, except that the Ptr is not -// in IntArgReg0 so as not to conflict with the result register. - -static const LiveRegisterSet AtomicNonVolatileRegs; - -static constexpr Register AtomicPtrReg = IntArgReg4; -static constexpr Register AtomicPtr2Reg = IntArgReg1; -static constexpr Register AtomicValReg = IntArgReg1; -static constexpr Register64 AtomicValReg64(IntArgReg1); -static constexpr Register AtomicVal2Reg = IntArgReg2; -static constexpr Register64 AtomicVal2Reg64(IntArgReg2); -static constexpr Register AtomicTemp = IntArgReg3; -static constexpr Register64 AtomicTemp64(IntArgReg3); - -#elif defined(JS_CODEGEN_ARM) - -// Assigned registers except temp are disjoint from the argument registers, -// since accounting for both 32-bit and 64-bit arguments and constraints on the -// result register is much too messy. The temp is in an argument register since -// it won't be used until we've moved all arguments to other registers. - -static const LiveRegisterSet AtomicNonVolatileRegs = - LiveRegisterSet(GeneralRegisterSet((uint32_t(1) << Registers::r4) | - (uint32_t(1) << Registers::r5) | - (uint32_t(1) << Registers::r6) | - (uint32_t(1) << Registers::r7) | - (uint32_t(1) << Registers::r8)), - FloatRegisterSet(0)); - -static constexpr Register AtomicPtrReg = r8; -static constexpr Register AtomicPtr2Reg = r6; -static constexpr Register AtomicTemp = r3; -static constexpr Register AtomicValReg = r6; -static constexpr Register64 AtomicValReg64(r7, r6); -static constexpr Register AtomicVal2Reg = r4; -static constexpr Register64 AtomicVal2Reg64(r5, r4); - -#elif defined(JS_CODEGEN_X86) - -// There are no argument registers. 
- -static const LiveRegisterSet AtomicNonVolatileRegs = - LiveRegisterSet(GeneralRegisterSet((1 << X86Encoding::rbx) | - (1 << X86Encoding::rsi)), - FloatRegisterSet(0)); - -static constexpr Register AtomicPtrReg = esi; -static constexpr Register AtomicPtr2Reg = ebx; -static constexpr Register AtomicValReg = ebx; -static constexpr Register AtomicVal2Reg = ecx; -static constexpr Register AtomicTemp = edx; - -// 64-bit registers for cmpxchg8b. ValReg/Val2Reg/Temp are not used in this -// case. - -static constexpr Register64 AtomicValReg64(edx, eax); -static constexpr Register64 AtomicVal2Reg64(ecx, ebx); - -#else -# error "Not implemented - not a tier1 platform" -#endif - -// These are useful shorthands and hide the meaningless uint/int distinction. - -static constexpr Scalar::Type SIZE8 = Scalar::Uint8; -static constexpr Scalar::Type SIZE16 = Scalar::Uint16; -static constexpr Scalar::Type SIZE32 = Scalar::Uint32; -static constexpr Scalar::Type SIZE64 = Scalar::Int64; -#ifdef JS_64BIT -static constexpr Scalar::Type SIZEWORD = SIZE64; -#else -static constexpr Scalar::Type SIZEWORD = SIZE32; -#endif - -// A "block" is a sequence of bytes that is a reasonable quantum to copy to -// amortize call overhead when implementing memcpy and memmove. A block will -// not fit in registers on all platforms and copying it without using -// intermediate memory will therefore be sensitive to overlap. -// -// A "word" is an item that we can copy using only register intermediate storage -// on all platforms; words can be individually copied without worrying about -// overlap. -// -// Blocks and words can be aligned or unaligned; specific (generated) copying -// functions handle this in platform-specific ways. - -static constexpr size_t WORDSIZE = sizeof(uintptr_t); // Also see SIZEWORD above -static constexpr size_t BLOCKSIZE = 8 * WORDSIZE; // Must be a power of 2 - -static_assert(BLOCKSIZE % WORDSIZE == 0, "A block is an integral number of words"); - -static constexpr size_t WORDMASK = WORDSIZE - 1; -static constexpr size_t BLOCKMASK = BLOCKSIZE - 1; - -struct ArgIterator -{ - ABIArgGenerator abi; - unsigned argBase = 0; -}; - -static void GenGprArg(MacroAssembler& masm, MIRType t, ArgIterator* iter, - Register reg) { - MOZ_ASSERT(t == MIRType::Pointer || t == MIRType::Int32); - ABIArg arg = iter->abi.next(t); - switch (arg.kind()) { - case ABIArg::GPR: { - if (arg.gpr() != reg) { - masm.movePtr(arg.gpr(), reg); - } - break; - } - case ABIArg::Stack: { - Address src(masm.getStackPointer(), - iter->argBase + arg.offsetFromArgBase()); - masm.loadPtr(src, reg); - break; - } - default: { - MOZ_CRASH("Not possible"); - } - } -} - -static void GenGpr64Arg(MacroAssembler& masm, ArgIterator* iter, - Register64 reg) { - ABIArg arg = iter->abi.next(MIRType::Int64); - switch (arg.kind()) { - case ABIArg::GPR: { - if (arg.gpr64() != reg) { - masm.move64(arg.gpr64(), reg); - } - break; - } - case ABIArg::Stack: { - Address src(masm.getStackPointer(), - iter->argBase + arg.offsetFromArgBase()); -#ifdef JS_64BIT - masm.load64(src, reg); -#else - masm.load32(LowWord(src), reg.low); - masm.load32(HighWord(src), reg.high); -#endif - break; - } -#if defined(JS_CODEGEN_REGISTER_PAIR) - case ABIArg::GPR_PAIR: { - if (arg.gpr64() != reg) { - masm.move32(arg.oddGpr(), reg.high); - masm.move32(arg.evenGpr(), reg.low); - } - break; - } -#endif - default: { - MOZ_CRASH("Not possible"); - } - } -} - -static uint32_t GenPrologue(MacroAssembler& masm, ArgIterator* iter) { - masm.assumeUnreachable("Shouldn't get here"); - 
masm.flushBuffer(); - masm.haltingAlign(CodeAlignment); - masm.setFramePushed(0); - uint32_t start = masm.currentOffset(); - masm.PushRegsInMask(AtomicNonVolatileRegs); - iter->argBase = sizeof(void*) + masm.framePushed(); - return start; -} - -static void GenEpilogue(MacroAssembler& masm) { - masm.PopRegsInMask(AtomicNonVolatileRegs); - MOZ_ASSERT(masm.framePushed() == 0); -#if defined(JS_CODEGEN_ARM64) - masm.Ret(); -#elif defined(JS_CODEGEN_ARM) - masm.mov(lr, pc); -#else - masm.ret(); -#endif -} - -#ifndef JS_64BIT -static uint32_t GenNop(MacroAssembler& masm) { - ArgIterator iter; - uint32_t start = GenPrologue(masm, &iter); - GenEpilogue(masm); - return start; -} -#endif - -static uint32_t GenFenceSeqCst(MacroAssembler& masm) { - ArgIterator iter; - uint32_t start = GenPrologue(masm, &iter); - masm.memoryBarrier(MembarFull); - GenEpilogue(masm); - return start; -} - -static uint32_t GenLoad(MacroAssembler& masm, Scalar::Type size, - Synchronization sync) { - ArgIterator iter; - uint32_t start = GenPrologue(masm, &iter); - GenGprArg(masm, MIRType::Pointer, &iter, AtomicPtrReg); - - masm.memoryBarrier(sync.barrierBefore); - Address addr(AtomicPtrReg, 0); - switch (size) { - case SIZE8: - masm.load8ZeroExtend(addr, ReturnReg); - break; - case SIZE16: - masm.load16ZeroExtend(addr, ReturnReg); - break; - case SIZE32: - masm.load32(addr, ReturnReg); - break; - case SIZE64: -#if defined(JS_64BIT) - masm.load64(addr, ReturnReg64); - break; -#else - MOZ_CRASH("64-bit atomic load not available on this platform"); -#endif - default: - MOZ_CRASH("Unknown size"); - } - masm.memoryBarrier(sync.barrierAfter); - - GenEpilogue(masm); - return start; -} - -static uint32_t GenStore(MacroAssembler& masm, Scalar::Type size, - Synchronization sync) { - ArgIterator iter; - uint32_t start = GenPrologue(masm, &iter); - GenGprArg(masm, MIRType::Pointer, &iter, AtomicPtrReg); - - masm.memoryBarrier(sync.barrierBefore); - Address addr(AtomicPtrReg, 0); - switch (size) { - case SIZE8: - GenGprArg(masm, MIRType::Int32, &iter, AtomicValReg); - masm.store8(AtomicValReg, addr); - break; - case SIZE16: - GenGprArg(masm, MIRType::Int32, &iter, AtomicValReg); - masm.store16(AtomicValReg, addr); - break; - case SIZE32: - GenGprArg(masm, MIRType::Int32, &iter, AtomicValReg); - masm.store32(AtomicValReg, addr); - break; - case SIZE64: -#if defined(JS_64BIT) - GenGpr64Arg(masm, &iter, AtomicValReg64); - masm.store64(AtomicValReg64, addr); - break; -#else - MOZ_CRASH("64-bit atomic store not available on this platform"); -#endif - default: - MOZ_CRASH("Unknown size"); - } - masm.memoryBarrier(sync.barrierAfter); - - GenEpilogue(masm); - return start; -} - -enum class CopyDir { - DOWN, // Move data down, ie, iterate toward higher addresses - UP // The other way -}; - -static uint32_t GenCopy(MacroAssembler& masm, Scalar::Type size, - uint32_t unroll, CopyDir direction) { - ArgIterator iter; - uint32_t start = GenPrologue(masm, &iter); - - Register dest = AtomicPtrReg; - Register src = AtomicPtr2Reg; - - GenGprArg(masm, MIRType::Pointer, &iter, dest); - GenGprArg(masm, MIRType::Pointer, &iter, src); - - uint32_t offset = direction == CopyDir::DOWN ? 
0 : unroll-1; - for (uint32_t i = 0; i < unroll; i++) { - switch (size) { - case SIZE8: - masm.load8ZeroExtend(Address(src, offset), AtomicTemp); - masm.store8(AtomicTemp, Address(dest, offset)); - break; - case SIZE16: - masm.load16ZeroExtend(Address(src, offset*2), AtomicTemp); - masm.store16(AtomicTemp, Address(dest, offset*2)); - break; - case SIZE32: - masm.load32(Address(src, offset*4), AtomicTemp); - masm.store32(AtomicTemp, Address(dest, offset*4)); - break; - case SIZE64: -#if defined(JS_64BIT) - masm.load64(Address(src, offset*8), AtomicTemp64); - masm.store64(AtomicTemp64, Address(dest, offset*8)); - break; -#else - MOZ_CRASH("64-bit atomic load/store not available on this platform"); -#endif - default: - MOZ_CRASH("Unknown size"); - } - offset += direction == CopyDir::DOWN ? 1 : -1; - } - - GenEpilogue(masm); - return start; -} - -static uint32_t GenCmpxchg(MacroAssembler& masm, Scalar::Type size, - Synchronization sync) { - ArgIterator iter; - uint32_t start = GenPrologue(masm, &iter); - GenGprArg(masm, MIRType::Pointer, &iter, AtomicPtrReg); - - Address addr(AtomicPtrReg, 0); - switch (size) { - case SIZE8: - case SIZE16: - case SIZE32: - GenGprArg(masm, MIRType::Int32, &iter, AtomicValReg); - GenGprArg(masm, MIRType::Int32, &iter, AtomicVal2Reg); - masm.compareExchange(size, sync, addr, AtomicValReg, AtomicVal2Reg, ReturnReg); - break; - case SIZE64: - GenGpr64Arg(masm, &iter, AtomicValReg64); - GenGpr64Arg(masm, &iter, AtomicVal2Reg64); -#if defined(JS_CODEGEN_X86) - MOZ_ASSERT(AtomicValReg64 == Register64(edx, eax)); - MOZ_ASSERT(AtomicVal2Reg64 == Register64(ecx, ebx)); - masm.lock_cmpxchg8b(edx, eax, ecx, ebx, Operand(addr)); - - MOZ_ASSERT(ReturnReg64 == Register64(edi, eax)); - masm.mov(edx, edi); -#else - masm.compareExchange64(sync, addr, AtomicValReg64, AtomicVal2Reg64, ReturnReg64); -#endif - break; - default: - MOZ_CRASH("Unknown size"); - } - - GenEpilogue(masm); - return start; -} - -static uint32_t GenExchange(MacroAssembler& masm, Scalar::Type size, - Synchronization sync) { - ArgIterator iter; - uint32_t start = GenPrologue(masm, &iter); - GenGprArg(masm, MIRType::Pointer, &iter, AtomicPtrReg); - - Address addr(AtomicPtrReg, 0); - switch (size) { - case SIZE8: - case SIZE16: - case SIZE32: - GenGprArg(masm, MIRType::Int32, &iter, AtomicValReg); - masm.atomicExchange(size, sync, addr, AtomicValReg, ReturnReg); - break; - case SIZE64: -#if defined(JS_64BIT) - GenGpr64Arg(masm, &iter, AtomicValReg64); - masm.atomicExchange64(sync, addr, AtomicValReg64, ReturnReg64); - break; -#else - MOZ_CRASH("64-bit atomic exchange not available on this platform"); -#endif - default: - MOZ_CRASH("Unknown size"); - } - - GenEpilogue(masm); - return start; -} - -static uint32_t -GenFetchOp(MacroAssembler& masm, Scalar::Type size, AtomicOp op, - Synchronization sync) { - ArgIterator iter; - uint32_t start = GenPrologue(masm, &iter); - GenGprArg(masm, MIRType::Pointer, &iter, AtomicPtrReg); - - Address addr(AtomicPtrReg, 0); - switch (size) { - case SIZE8: - case SIZE16: - case SIZE32: { -#if defined(JS_CODEGEN_X86) || defined(JS_CODEGEN_X64) - Register tmp = op == AtomicFetchAddOp || op == AtomicFetchSubOp - ? Register::Invalid() - : AtomicTemp; -#else - Register tmp = AtomicTemp; -#endif - GenGprArg(masm, MIRType::Int32, &iter, AtomicValReg); - masm.atomicFetchOp(size, sync, op, AtomicValReg, addr, tmp, ReturnReg); - break; - } - case SIZE64: { -#if defined(JS_64BIT) -# if defined(JS_CODEGEN_X64) - Register64 tmp = op == AtomicFetchAddOp || op == AtomicFetchSubOp - ? 
Register64::Invalid() - : AtomicTemp64; -# else - Register64 tmp = AtomicTemp64; -# endif - GenGpr64Arg(masm, &iter, AtomicValReg64); - masm.atomicFetchOp64(sync, op, AtomicValReg64, addr, tmp, ReturnReg64); - break; -#else - MOZ_CRASH("64-bit atomic fetchOp not available on this platform"); -#endif - } - default: - MOZ_CRASH("Unknown size"); - } - - GenEpilogue(masm); - return start; -} - -namespace js { -namespace jit { - -void (*AtomicFenceSeqCst)(); - -#ifndef JS_64BIT -void (*AtomicCompilerFence)(); -#endif - -uint8_t (*AtomicLoad8SeqCst)(const uint8_t* addr); -uint16_t (*AtomicLoad16SeqCst)(const uint16_t* addr); -uint32_t (*AtomicLoad32SeqCst)(const uint32_t* addr); -#ifdef JS_64BIT -uint64_t (*AtomicLoad64SeqCst)(const uint64_t* addr); -#endif - -uint8_t (*AtomicLoad8Unsynchronized)(const uint8_t* addr); -uint16_t (*AtomicLoad16Unsynchronized)(const uint16_t* addr); -uint32_t (*AtomicLoad32Unsynchronized)(const uint32_t* addr); -#ifdef JS_64BIT -uint64_t (*AtomicLoad64Unsynchronized)(const uint64_t* addr); -#endif - -uint8_t (*AtomicStore8SeqCst)(uint8_t* addr, uint8_t val); -uint16_t (*AtomicStore16SeqCst)(uint16_t* addr, uint16_t val); -uint32_t (*AtomicStore32SeqCst)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -uint64_t (*AtomicStore64SeqCst)(uint64_t* addr, uint64_t val); -#endif - -uint8_t (*AtomicStore8Unsynchronized)(uint8_t* addr, uint8_t val); -uint16_t (*AtomicStore16Unsynchronized)(uint16_t* addr, uint16_t val); -uint32_t (*AtomicStore32Unsynchronized)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -uint64_t (*AtomicStore64Unsynchronized)(uint64_t* addr, uint64_t val); -#endif - -// See the definitions of BLOCKSIZE and WORDSIZE earlier. The "unaligned" -// functions perform individual byte copies (and must always be "down" or "up"). -// The others ignore alignment issues, and thus either depend on unaligned -// accesses being OK or not being invoked on unaligned addresses. -// -// src and dest point to the lower addresses of the respective data areas -// irrespective of "up" or "down". 
- -static void (*AtomicCopyUnalignedBlockDownUnsynchronized)(uint8_t* dest, const uint8_t* src); -static void (*AtomicCopyUnalignedBlockUpUnsynchronized)(uint8_t* dest, const uint8_t* src); -static void (*AtomicCopyUnalignedWordDownUnsynchronized)(uint8_t* dest, const uint8_t* src); -static void (*AtomicCopyUnalignedWordUpUnsynchronized)(uint8_t* dest, const uint8_t* src); - -static void (*AtomicCopyBlockDownUnsynchronized)(uint8_t* dest, const uint8_t* src); -static void (*AtomicCopyBlockUpUnsynchronized)(uint8_t* dest, const uint8_t* src); -static void (*AtomicCopyWordUnsynchronized)(uint8_t* dest, const uint8_t* src); -static void (*AtomicCopyByteUnsynchronized)(uint8_t* dest, const uint8_t* src); - -uint8_t (*AtomicCmpXchg8SeqCst)(uint8_t* addr, uint8_t oldval, uint8_t newval); -uint16_t (*AtomicCmpXchg16SeqCst)(uint16_t* addr, uint16_t oldval, uint16_t newval); -uint32_t (*AtomicCmpXchg32SeqCst)(uint32_t* addr, uint32_t oldval, uint32_t newval); -uint64_t (*AtomicCmpXchg64SeqCst)(uint64_t* addr, uint64_t oldval, uint64_t newval); - -uint8_t (*AtomicExchange8SeqCst)(uint8_t* addr, uint8_t val); -uint16_t (*AtomicExchange16SeqCst)(uint16_t* addr, uint16_t val); -uint32_t (*AtomicExchange32SeqCst)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -uint64_t (*AtomicExchange64SeqCst)(uint64_t* addr, uint64_t val); -#endif - -uint8_t (*AtomicAdd8SeqCst)(uint8_t* addr, uint8_t val); -uint16_t (*AtomicAdd16SeqCst)(uint16_t* addr, uint16_t val); -uint32_t (*AtomicAdd32SeqCst)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -uint64_t (*AtomicAdd64SeqCst)(uint64_t* addr, uint64_t val); -#endif - -uint8_t (*AtomicAnd8SeqCst)(uint8_t* addr, uint8_t val); -uint16_t (*AtomicAnd16SeqCst)(uint16_t* addr, uint16_t val); -uint32_t (*AtomicAnd32SeqCst)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -uint64_t (*AtomicAnd64SeqCst)(uint64_t* addr, uint64_t val); -#endif - -uint8_t (*AtomicOr8SeqCst)(uint8_t* addr, uint8_t val); -uint16_t (*AtomicOr16SeqCst)(uint16_t* addr, uint16_t val); -uint32_t (*AtomicOr32SeqCst)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -uint64_t (*AtomicOr64SeqCst)(uint64_t* addr, uint64_t val); -#endif - -uint8_t (*AtomicXor8SeqCst)(uint8_t* addr, uint8_t val); -uint16_t (*AtomicXor16SeqCst)(uint16_t* addr, uint16_t val); -uint32_t (*AtomicXor32SeqCst)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -uint64_t (*AtomicXor64SeqCst)(uint64_t* addr, uint64_t val); -#endif - -static bool UnalignedAccessesAreOK() { -#ifdef DEBUG - const char* flag = getenv("JS_NO_UNALIGNED_MEMCPY"); - if (flag && *flag == '1') - return false; -#endif -#if defined(JS_CODEGEN_X86) || defined(JS_CODEGEN_X64) - return true; -#elif defined(JS_CODEGEN_ARM) - return !HasAlignmentFault(); -#elif defined(JS_CODEGEN_ARM64) - // This is not necessarily true but it's the best guess right now. - return true; -#else - return false; -#endif -} - -void AtomicMemcpyDownUnsynchronized(uint8_t* dest, const uint8_t* src, - size_t nbytes) { - const uint8_t* lim = src + nbytes; - - // Set up bulk copying. The cases are ordered the way they are on the - // assumption that if we can achieve aligned copies even with a little - // preprocessing then that is better than unaligned copying on a platform - // that supports it. 
- - if (nbytes >= WORDSIZE) { - void (*copyBlock)(uint8_t* dest, const uint8_t* src); - void (*copyWord)(uint8_t* dest, const uint8_t* src); - - if (((uintptr_t(dest) ^ uintptr_t(src)) & WORDMASK) == 0) { - const uint8_t* cutoff = (const uint8_t*)JS_ROUNDUP(uintptr_t(src), - WORDSIZE); - MOZ_ASSERT(cutoff <= lim); // because nbytes >= WORDSIZE - while (src < cutoff) { - AtomicCopyByteUnsynchronized(dest++, src++); - } - copyBlock = AtomicCopyBlockDownUnsynchronized; - copyWord = AtomicCopyWordUnsynchronized; - } - else if (UnalignedAccessesAreOK()) { - copyBlock = AtomicCopyBlockDownUnsynchronized; - copyWord = AtomicCopyWordUnsynchronized; - } else { - copyBlock = AtomicCopyUnalignedBlockDownUnsynchronized; - copyWord = AtomicCopyUnalignedWordDownUnsynchronized; - } - - // Bulk copy, first larger blocks and then individual words. - - const uint8_t* blocklim = src + ((lim - src) & ~BLOCKMASK); - while (src < blocklim) { - copyBlock(dest, src); - dest += BLOCKSIZE; - src += BLOCKSIZE; - } - - const uint8_t* wordlim = src + ((lim - src) & ~WORDMASK); - while (src < wordlim) { - copyWord(dest, src); - dest += WORDSIZE; - src += WORDSIZE; - } - } - - // Byte copy any remaining tail. - - while (src < lim) { - AtomicCopyByteUnsynchronized(dest++, src++); - } -} - -void AtomicMemcpyUpUnsynchronized(uint8_t* dest, const uint8_t* src, - size_t nbytes) { - const uint8_t* lim = src; - - src += nbytes; - dest += nbytes; - - if (nbytes >= WORDSIZE) { - void (*copyBlock)(uint8_t* dest, const uint8_t* src); - void (*copyWord)(uint8_t* dest, const uint8_t* src); - - if (((uintptr_t(dest) ^ uintptr_t(src)) & WORDMASK) == 0) { - const uint8_t* cutoff = (const uint8_t*)(uintptr_t(src) & ~WORDMASK); - MOZ_ASSERT(cutoff >= lim); // Because nbytes >= WORDSIZE - while (src > cutoff) { - AtomicCopyByteUnsynchronized(--dest, --src); - } - copyBlock = AtomicCopyBlockUpUnsynchronized; - copyWord = AtomicCopyWordUnsynchronized; - } - else if (UnalignedAccessesAreOK()) { - copyBlock = AtomicCopyBlockUpUnsynchronized; - copyWord = AtomicCopyWordUnsynchronized; - } else { - copyBlock = AtomicCopyUnalignedBlockUpUnsynchronized; - copyWord = AtomicCopyUnalignedWordUpUnsynchronized; - } - - const uint8_t* blocklim = src - ((src - lim) & ~BLOCKMASK); - while (src > blocklim) { - dest -= BLOCKSIZE; - src -= BLOCKSIZE; - copyBlock(dest, src); - } - - const uint8_t* wordlim = src - ((src - lim) & ~WORDMASK); - while (src > wordlim) { - dest -= WORDSIZE; - src -= WORDSIZE; - copyWord(dest, src); - } - } - - while (src > lim) { - AtomicCopyByteUnsynchronized(--dest, --src); - } -} - -// These will be read and written only by the main thread during startup and -// shutdown. - -static uint8_t* codeSegment; -static uint32_t codeSegmentSize; - -bool InitializeJittedAtomics() { - // We should only initialize once. 
- MOZ_ASSERT(!codeSegment); - - LifoAlloc lifo(4096); - TempAllocator alloc(&lifo); - JitContext jcx(&alloc); - StackMacroAssembler masm; - - uint32_t fenceSeqCst = GenFenceSeqCst(masm); - -#ifndef JS_64BIT - uint32_t nop = GenNop(masm); -#endif - - Synchronization Full = Synchronization::Full(); - Synchronization None = Synchronization::None(); - - uint32_t load8SeqCst = GenLoad(masm, SIZE8, Full); - uint32_t load16SeqCst = GenLoad(masm, SIZE16, Full); - uint32_t load32SeqCst = GenLoad(masm, SIZE32, Full); -#ifdef JS_64BIT - uint32_t load64SeqCst = GenLoad(masm, SIZE64, Full); -#endif - - uint32_t load8Unsynchronized = GenLoad(masm, SIZE8, None); - uint32_t load16Unsynchronized = GenLoad(masm, SIZE16, None); - uint32_t load32Unsynchronized = GenLoad(masm, SIZE32, None); -#ifdef JS_64BIT - uint32_t load64Unsynchronized = GenLoad(masm, SIZE64, None); -#endif - - uint32_t store8SeqCst = GenStore(masm, SIZE8, Full); - uint32_t store16SeqCst = GenStore(masm, SIZE16, Full); - uint32_t store32SeqCst = GenStore(masm, SIZE32, Full); -#ifdef JS_64BIT - uint32_t store64SeqCst = GenStore(masm, SIZE64, Full); -#endif - - uint32_t store8Unsynchronized = GenStore(masm, SIZE8, None); - uint32_t store16Unsynchronized = GenStore(masm, SIZE16, None); - uint32_t store32Unsynchronized = GenStore(masm, SIZE32, None); -#ifdef JS_64BIT - uint32_t store64Unsynchronized = GenStore(masm, SIZE64, None); -#endif - - uint32_t copyUnalignedBlockDownUnsynchronized = - GenCopy(masm, SIZE8, BLOCKSIZE, CopyDir::DOWN); - uint32_t copyUnalignedBlockUpUnsynchronized = - GenCopy(masm, SIZE8, BLOCKSIZE, CopyDir::UP); - uint32_t copyUnalignedWordDownUnsynchronized = - GenCopy(masm, SIZE8, WORDSIZE, CopyDir::DOWN); - uint32_t copyUnalignedWordUpUnsynchronized = - GenCopy(masm, SIZE8, WORDSIZE, CopyDir::UP); - - uint32_t copyBlockDownUnsynchronized = - GenCopy(masm, SIZEWORD, BLOCKSIZE/WORDSIZE, CopyDir::DOWN); - uint32_t copyBlockUpUnsynchronized = - GenCopy(masm, SIZEWORD, BLOCKSIZE/WORDSIZE, CopyDir::UP); - uint32_t copyWordUnsynchronized = GenCopy(masm, SIZEWORD, 1, CopyDir::DOWN); - uint32_t copyByteUnsynchronized = GenCopy(masm, SIZE8, 1, CopyDir::DOWN); - - uint32_t cmpxchg8SeqCst = GenCmpxchg(masm, SIZE8, Full); - uint32_t cmpxchg16SeqCst = GenCmpxchg(masm, SIZE16, Full); - uint32_t cmpxchg32SeqCst = GenCmpxchg(masm, SIZE32, Full); - uint32_t cmpxchg64SeqCst = GenCmpxchg(masm, SIZE64, Full); - - uint32_t exchange8SeqCst = GenExchange(masm, SIZE8, Full); - uint32_t exchange16SeqCst = GenExchange(masm, SIZE16, Full); - uint32_t exchange32SeqCst = GenExchange(masm, SIZE32, Full); -#ifdef JS_64BIT - uint32_t exchange64SeqCst = GenExchange(masm, SIZE64, Full); -#endif - - uint32_t add8SeqCst = GenFetchOp(masm, SIZE8, AtomicFetchAddOp, Full); - uint32_t add16SeqCst = GenFetchOp(masm, SIZE16, AtomicFetchAddOp, Full); - uint32_t add32SeqCst = GenFetchOp(masm, SIZE32, AtomicFetchAddOp, Full); -#ifdef JS_64BIT - uint32_t add64SeqCst = GenFetchOp(masm, SIZE64, AtomicFetchAddOp, Full); -#endif - - uint32_t and8SeqCst = GenFetchOp(masm, SIZE8, AtomicFetchAndOp, Full); - uint32_t and16SeqCst = GenFetchOp(masm, SIZE16, AtomicFetchAndOp, Full); - uint32_t and32SeqCst = GenFetchOp(masm, SIZE32, AtomicFetchAndOp, Full); -#ifdef JS_64BIT - uint32_t and64SeqCst = GenFetchOp(masm, SIZE64, AtomicFetchAndOp, Full); -#endif - - uint32_t or8SeqCst = GenFetchOp(masm, SIZE8, AtomicFetchOrOp, Full); - uint32_t or16SeqCst = GenFetchOp(masm, SIZE16, AtomicFetchOrOp, Full); - uint32_t or32SeqCst = GenFetchOp(masm, SIZE32, AtomicFetchOrOp, Full); -#ifdef 
JS_64BIT - uint32_t or64SeqCst = GenFetchOp(masm, SIZE64, AtomicFetchOrOp, Full); -#endif - - uint32_t xor8SeqCst = GenFetchOp(masm, SIZE8, AtomicFetchXorOp, Full); - uint32_t xor16SeqCst = GenFetchOp(masm, SIZE16, AtomicFetchXorOp, Full); - uint32_t xor32SeqCst = GenFetchOp(masm, SIZE32, AtomicFetchXorOp, Full); -#ifdef JS_64BIT - uint32_t xor64SeqCst = GenFetchOp(masm, SIZE64, AtomicFetchXorOp, Full); -#endif - - masm.finish(); - if (masm.oom()) { - return false; - } - - // Allocate executable memory. - uint32_t codeLength = masm.bytesNeeded(); - size_t roundedCodeLength = JS_ROUNDUP(codeLength, ExecutableCodePageSize); - uint8_t* code = - (uint8_t*)AllocateExecutableMemory(roundedCodeLength, - ProtectionSetting::Writable, - MemCheckKind::MakeUndefined); - if (!code) { - return false; - } - - // Zero the padding. - memset(code + codeLength, 0, roundedCodeLength - codeLength); - - // Copy the code into place but do not flush, as the flush path requires a - // JSContext* we do not have. - masm.executableCopy(code, /* flushICache = */ false); - - // Flush the icache using a primitive method. - ExecutableAllocator::cacheFlush(code, roundedCodeLength); - - // Reprotect the whole region to avoid having separate RW and RX mappings. - if (!ExecutableAllocator::makeExecutable(code, roundedCodeLength)) { - DeallocateExecutableMemory(code, roundedCodeLength); - return false; - } - - // Create the function pointers. - - AtomicFenceSeqCst = (void(*)())(code + fenceSeqCst); - -#ifndef JS_64BIT - AtomicCompilerFence = (void(*)())(code + nop); -#endif - - AtomicLoad8SeqCst = (uint8_t(*)(const uint8_t* addr))(code + load8SeqCst); - AtomicLoad16SeqCst = (uint16_t(*)(const uint16_t* addr))(code + load16SeqCst); - AtomicLoad32SeqCst = (uint32_t(*)(const uint32_t* addr))(code + load32SeqCst); -#ifdef JS_64BIT - AtomicLoad64SeqCst = (uint64_t(*)(const uint64_t* addr))(code + load64SeqCst); -#endif - - AtomicLoad8Unsynchronized = - (uint8_t(*)(const uint8_t* addr))(code + load8Unsynchronized); - AtomicLoad16Unsynchronized = - (uint16_t(*)(const uint16_t* addr))(code + load16Unsynchronized); - AtomicLoad32Unsynchronized = - (uint32_t(*)(const uint32_t* addr))(code + load32Unsynchronized); -#ifdef JS_64BIT - AtomicLoad64Unsynchronized = - (uint64_t(*)(const uint64_t* addr))(code + load64Unsynchronized); -#endif - - AtomicStore8SeqCst = - (uint8_t(*)(uint8_t* addr, uint8_t val))(code + store8SeqCst); - AtomicStore16SeqCst = - (uint16_t(*)(uint16_t* addr, uint16_t val))(code + store16SeqCst); - AtomicStore32SeqCst = - (uint32_t(*)(uint32_t* addr, uint32_t val))(code + store32SeqCst); -#ifdef JS_64BIT - AtomicStore64SeqCst = - (uint64_t(*)(uint64_t* addr, uint64_t val))(code + store64SeqCst); -#endif - - AtomicStore8Unsynchronized = - (uint8_t(*)(uint8_t* addr, uint8_t val))(code + store8Unsynchronized); - AtomicStore16Unsynchronized = - (uint16_t(*)(uint16_t* addr, uint16_t val))(code + store16Unsynchronized); - AtomicStore32Unsynchronized = - (uint32_t(*)(uint32_t* addr, uint32_t val))(code + store32Unsynchronized); -#ifdef JS_64BIT - AtomicStore64Unsynchronized = - (uint64_t(*)(uint64_t* addr, uint64_t val))(code + store64Unsynchronized); -#endif - - AtomicCopyUnalignedBlockDownUnsynchronized = - (void(*)(uint8_t* dest, const uint8_t* src))( - code + copyUnalignedBlockDownUnsynchronized); - AtomicCopyUnalignedBlockUpUnsynchronized = - (void(*)(uint8_t* dest, const uint8_t* src))( - code + copyUnalignedBlockUpUnsynchronized); - AtomicCopyUnalignedWordDownUnsynchronized = - (void(*)(uint8_t* dest, const uint8_t* 
src))( - code + copyUnalignedWordDownUnsynchronized); - AtomicCopyUnalignedWordUpUnsynchronized = - (void(*)(uint8_t* dest, const uint8_t* src))( - code + copyUnalignedWordUpUnsynchronized); - - AtomicCopyBlockDownUnsynchronized = - (void(*)(uint8_t* dest, const uint8_t* src))( - code + copyBlockDownUnsynchronized); - AtomicCopyBlockUpUnsynchronized = - (void(*)(uint8_t* dest, const uint8_t* src))( - code + copyBlockUpUnsynchronized); - AtomicCopyWordUnsynchronized = - (void(*)(uint8_t* dest, const uint8_t* src))(code + copyWordUnsynchronized); - AtomicCopyByteUnsynchronized = - (void(*)(uint8_t* dest, const uint8_t* src))(code + copyByteUnsynchronized); - - AtomicCmpXchg8SeqCst = - (uint8_t(*)(uint8_t* addr, uint8_t oldval, uint8_t newval))( - code + cmpxchg8SeqCst); - AtomicCmpXchg16SeqCst = - (uint16_t(*)(uint16_t* addr, uint16_t oldval, uint16_t newval))( - code + cmpxchg16SeqCst); - AtomicCmpXchg32SeqCst = - (uint32_t(*)(uint32_t* addr, uint32_t oldval, uint32_t newval))( - code + cmpxchg32SeqCst); - AtomicCmpXchg64SeqCst = - (uint64_t(*)(uint64_t* addr, uint64_t oldval, uint64_t newval))( - code + cmpxchg64SeqCst); - - AtomicExchange8SeqCst = (uint8_t(*)(uint8_t* addr, uint8_t val))( - code + exchange8SeqCst); - AtomicExchange16SeqCst = (uint16_t(*)(uint16_t* addr, uint16_t val))( - code + exchange16SeqCst); - AtomicExchange32SeqCst = (uint32_t(*)(uint32_t* addr, uint32_t val))( - code + exchange32SeqCst); -#ifdef JS_64BIT - AtomicExchange64SeqCst = (uint64_t(*)(uint64_t* addr, uint64_t val))( - code + exchange64SeqCst); -#endif - - AtomicAdd8SeqCst = - (uint8_t(*)(uint8_t* addr, uint8_t val))(code + add8SeqCst); - AtomicAdd16SeqCst = - (uint16_t(*)(uint16_t* addr, uint16_t val))(code + add16SeqCst); - AtomicAdd32SeqCst = - (uint32_t(*)(uint32_t* addr, uint32_t val))(code + add32SeqCst); -#ifdef JS_64BIT - AtomicAdd64SeqCst = - (uint64_t(*)(uint64_t* addr, uint64_t val))(code + add64SeqCst); -#endif - - AtomicAnd8SeqCst = - (uint8_t(*)(uint8_t* addr, uint8_t val))(code + and8SeqCst); - AtomicAnd16SeqCst = - (uint16_t(*)(uint16_t* addr, uint16_t val))(code + and16SeqCst); - AtomicAnd32SeqCst = - (uint32_t(*)(uint32_t* addr, uint32_t val))(code + and32SeqCst); -#ifdef JS_64BIT - AtomicAnd64SeqCst = - (uint64_t(*)(uint64_t* addr, uint64_t val))(code + and64SeqCst); -#endif - - AtomicOr8SeqCst = - (uint8_t(*)(uint8_t* addr, uint8_t val))(code + or8SeqCst); - AtomicOr16SeqCst = - (uint16_t(*)(uint16_t* addr, uint16_t val))(code + or16SeqCst); - AtomicOr32SeqCst = - (uint32_t(*)(uint32_t* addr, uint32_t val))(code + or32SeqCst); -#ifdef JS_64BIT - AtomicOr64SeqCst = - (uint64_t(*)(uint64_t* addr, uint64_t val))(code + or64SeqCst); -#endif - - AtomicXor8SeqCst = - (uint8_t(*)(uint8_t* addr, uint8_t val))(code + xor8SeqCst); - AtomicXor16SeqCst = - (uint16_t(*)(uint16_t* addr, uint16_t val))(code + xor16SeqCst); - AtomicXor32SeqCst = - (uint32_t(*)(uint32_t* addr, uint32_t val))(code + xor32SeqCst); -#ifdef JS_64BIT - AtomicXor64SeqCst = - (uint64_t(*)(uint64_t* addr, uint64_t val))(code + xor64SeqCst); -#endif - - codeSegment = code; - codeSegmentSize = roundedCodeLength; - - return true; -} - -void ShutDownJittedAtomics() { - // Must have been initialized. 
- MOZ_ASSERT(codeSegment); - - DeallocateExecutableMemory(codeSegment, codeSegmentSize); - codeSegment = nullptr; - codeSegmentSize = 0; -} - -} // jit -} // js diff --git a/js/src/jit/shared/AtomicOperations-shared-jit.h b/js/src/jit/shared/AtomicOperations-shared-jit.h deleted file mode 100644 index 5f9c54557e585..0000000000000 --- a/js/src/jit/shared/AtomicOperations-shared-jit.h +++ /dev/null @@ -1,605 +0,0 @@ -/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- - * vim: set ts=8 sts=4 et sw=4 tw=99: - * This Source Code Form is subject to the terms of the Mozilla Public - * License, v. 2.0. If a copy of the MPL was not distributed with this - * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ - -/* For overall documentation, see jit/AtomicOperations.h. - * - * NOTE CAREFULLY: This file is only applicable when we have configured a JIT - * and the JIT is for the same architecture that we're compiling the shell for. - * Simulators must use a different mechanism. - * - * See comments before the include nest near the end of jit/AtomicOperations.h - * if you didn't understand that. - */ - -#ifndef jit_shared_AtomicOperations_shared_jit_h -#define jit_shared_AtomicOperations_shared_jit_h - -#include "mozilla/Assertions.h" -#include "mozilla/Types.h" - -#include "jsapi.h" - -#include "vm/ArrayBufferObject.h" - -namespace js { -namespace jit { - -// The function pointers in this section all point to jitted code. -// -// On 32-bit systems we assume for simplicity's sake that we don't have any -// 64-bit atomic operations except cmpxchg (this is a concession to x86 but it's -// not a hardship). On 32-bit systems we therefore implement other 64-bit -// atomic operations in terms of cmpxchg along with some C++ code and a local -// reordering fence to prevent other loads and stores from being intermingled -// with operations in the implementation of the atomic. - -// `fence` performs a full memory barrier. -extern void (*AtomicFenceSeqCst)(); - -#ifndef JS_64BIT -// `compiler_fence` erects a reordering boundary for operations on the current -// thread. We use it to prevent the compiler from reordering loads and stores -// inside larger primitives that are synthesized from cmpxchg. -extern void (*AtomicCompilerFence)(); -#endif - -extern uint8_t (*AtomicLoad8SeqCst)(const uint8_t* addr); -extern uint16_t (*AtomicLoad16SeqCst)(const uint16_t* addr); -extern uint32_t (*AtomicLoad32SeqCst)(const uint32_t* addr); -#ifdef JS_64BIT -extern uint64_t (*AtomicLoad64SeqCst)(const uint64_t* addr); -#endif - -// These are access-atomic up to sizeof(uintptr_t). -extern uint8_t (*AtomicLoad8Unsynchronized)(const uint8_t* addr); -extern uint16_t (*AtomicLoad16Unsynchronized)(const uint16_t* addr); -extern uint32_t (*AtomicLoad32Unsynchronized)(const uint32_t* addr); -#ifdef JS_64BIT -extern uint64_t (*AtomicLoad64Unsynchronized)(const uint64_t* addr); -#endif - -extern uint8_t (*AtomicStore8SeqCst)(uint8_t* addr, uint8_t val); -extern uint16_t (*AtomicStore16SeqCst)(uint16_t* addr, uint16_t val); -extern uint32_t (*AtomicStore32SeqCst)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -extern uint64_t (*AtomicStore64SeqCst)(uint64_t* addr, uint64_t val); -#endif - -// These are access-atomic up to sizeof(uintptr_t). 
-extern uint8_t (*AtomicStore8Unsynchronized)(uint8_t* addr, uint8_t val); -extern uint16_t (*AtomicStore16Unsynchronized)(uint16_t* addr, uint16_t val); -extern uint32_t (*AtomicStore32Unsynchronized)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -extern uint64_t (*AtomicStore64Unsynchronized)(uint64_t* addr, uint64_t val); -#endif - -// `exchange` takes a cell address and a value. It stores it in the cell and -// returns the value previously in the cell. -extern uint8_t (*AtomicExchange8SeqCst)(uint8_t* addr, uint8_t val); -extern uint16_t (*AtomicExchange16SeqCst)(uint16_t* addr, uint16_t val); -extern uint32_t (*AtomicExchange32SeqCst)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -extern uint64_t (*AtomicExchange64SeqCst)(uint64_t* addr, uint64_t val); -#endif - -// `add` adds a value atomically to the cell and returns the old value in the -// cell. (There is no `sub`; just add the negated value.) -extern uint8_t (*AtomicAdd8SeqCst)(uint8_t* addr, uint8_t val); -extern uint16_t (*AtomicAdd16SeqCst)(uint16_t* addr, uint16_t val); -extern uint32_t (*AtomicAdd32SeqCst)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -extern uint64_t (*AtomicAdd64SeqCst)(uint64_t* addr, uint64_t val); -#endif - -// `and` bitwise-ands a value atomically into the cell and returns the old value -// in the cell. -extern uint8_t (*AtomicAnd8SeqCst)(uint8_t* addr, uint8_t val); -extern uint16_t (*AtomicAnd16SeqCst)(uint16_t* addr, uint16_t val); -extern uint32_t (*AtomicAnd32SeqCst)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -extern uint64_t (*AtomicAnd64SeqCst)(uint64_t* addr, uint64_t val); -#endif - -// `or` bitwise-ors a value atomically into the cell and returns the old value -// in the cell. -extern uint8_t (*AtomicOr8SeqCst)(uint8_t* addr, uint8_t val); -extern uint16_t (*AtomicOr16SeqCst)(uint16_t* addr, uint16_t val); -extern uint32_t (*AtomicOr32SeqCst)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -extern uint64_t (*AtomicOr64SeqCst)(uint64_t* addr, uint64_t val); -#endif - -// `xor` bitwise-xors a value atomically into the cell and returns the old value -// in the cell. -extern uint8_t (*AtomicXor8SeqCst)(uint8_t* addr, uint8_t val); -extern uint16_t (*AtomicXor16SeqCst)(uint16_t* addr, uint16_t val); -extern uint32_t (*AtomicXor32SeqCst)(uint32_t* addr, uint32_t val); -#ifdef JS_64BIT -extern uint64_t (*AtomicXor64SeqCst)(uint64_t* addr, uint64_t val); -#endif - -// `cmpxchg` takes a cell address, an expected value and a replacement value. -// If the value in the cell equals the expected value then the replacement value -// is stored in the cell. It always returns the value previously in the cell. -extern uint8_t (*AtomicCmpXchg8SeqCst)(uint8_t* addr, uint8_t oldval, uint8_t newval); -extern uint16_t (*AtomicCmpXchg16SeqCst)(uint16_t* addr, uint16_t oldval, uint16_t newval); -extern uint32_t (*AtomicCmpXchg32SeqCst)(uint32_t* addr, uint32_t oldval, uint32_t newval); -extern uint64_t (*AtomicCmpXchg64SeqCst)(uint64_t* addr, uint64_t oldval, uint64_t newval); - -// `...MemcpyDown` moves bytes toward lower addresses in memory: dest <= src. -// `...MemcpyUp` moves bytes toward higher addresses in memory: dest >= src. 
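A small usage sketch of the direction rule above (illustrative only; ShiftBytesInSharedBuffer is a made-up helper using the primitives declared just below): when source and destination overlap, a copy toward lower addresses must walk upward through memory and a copy toward higher addresses must walk downward, which is how the memmove path later in this header picks between the two.

// Assumes len > 0.
static void ShiftBytesInSharedBuffer(uint8_t* buf, size_t len) {
  // Shift left by one byte: dest < src, so use the "down" copy,
  // which iterates from low addresses toward high ones.
  AtomicMemcpyDownUnsynchronized(buf, buf + 1, len - 1);

  // Shift right by one byte: dest > src, so use the "up" copy,
  // which iterates from high addresses toward low ones.
  AtomicMemcpyUpUnsynchronized(buf + 1, buf, len - 1);
}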
-extern void AtomicMemcpyDownUnsynchronized(uint8_t* dest, const uint8_t* src, size_t nbytes); -extern void AtomicMemcpyUpUnsynchronized(uint8_t* dest, const uint8_t* src, size_t nbytes); - -} } - -inline bool js::jit::AtomicOperations::hasAtomic8() { - return true; -} - -inline bool js::jit::AtomicOperations::isLockfree8() { - return true; -} - -inline void -js::jit::AtomicOperations::fenceSeqCst() { - AtomicFenceSeqCst(); -} - -#define JIT_LOADOP(T, U, loadop) \ - template<> inline T \ - AtomicOperations::loadSeqCst(T* addr) { \ - JS::AutoSuppressGCAnalysis nogc; \ - return (T)loadop((U*)addr); \ - } - -#ifndef JS_64BIT -# define JIT_LOADOP_CAS(T) \ - template<> \ - inline T \ - AtomicOperations::loadSeqCst(T* addr) { \ - JS::AutoSuppressGCAnalysis nogc; \ - AtomicCompilerFence(); \ - return (T)AtomicCmpXchg64SeqCst((uint64_t*)addr, 0, 0); \ - } -#endif // !JS_64BIT - -namespace js { -namespace jit { - -JIT_LOADOP(int8_t, uint8_t, AtomicLoad8SeqCst) -JIT_LOADOP(uint8_t, uint8_t, AtomicLoad8SeqCst) -JIT_LOADOP(int16_t, uint16_t, AtomicLoad16SeqCst) -JIT_LOADOP(uint16_t, uint16_t, AtomicLoad16SeqCst) -JIT_LOADOP(int32_t, uint32_t, AtomicLoad32SeqCst) -JIT_LOADOP(uint32_t, uint32_t, AtomicLoad32SeqCst) - -#ifdef JIT_LOADOP_CAS -JIT_LOADOP_CAS(int64_t) -JIT_LOADOP_CAS(uint64_t) -#else -JIT_LOADOP(int64_t, uint64_t, AtomicLoad64SeqCst) -JIT_LOADOP(uint64_t, uint64_t, AtomicLoad64SeqCst) -#endif - -}} - -#undef JIT_LOADOP -#undef JIT_LOADOP_CAS - -#define JIT_STOREOP(T, U, storeop) \ - template<> inline void \ - AtomicOperations::storeSeqCst(T* addr, T val) { \ - JS::AutoSuppressGCAnalysis nogc; \ - storeop((U*)addr, val); \ - } - -#ifndef JS_64BIT -# define JIT_STOREOP_CAS(T) \ - template<> \ - inline void \ - AtomicOperations::storeSeqCst(T* addr, T val) { \ - JS::AutoSuppressGCAnalysis nogc; \ - AtomicCompilerFence(); \ - T oldval = *addr; /* good initial approximation */ \ - for (;;) { \ - T nextval = (T)AtomicCmpXchg64SeqCst((uint64_t*)addr, \ - (uint64_t)oldval, \ - (uint64_t)val); \ - if (nextval == oldval) { \ - break; \ - } \ - oldval = nextval; \ - } \ - AtomicCompilerFence(); \ - } -#endif // !JS_64BIT - -namespace js { -namespace jit { - -JIT_STOREOP(int8_t, uint8_t, AtomicStore8SeqCst) -JIT_STOREOP(uint8_t, uint8_t, AtomicStore8SeqCst) -JIT_STOREOP(int16_t, uint16_t, AtomicStore16SeqCst) -JIT_STOREOP(uint16_t, uint16_t, AtomicStore16SeqCst) -JIT_STOREOP(int32_t, uint32_t, AtomicStore32SeqCst) -JIT_STOREOP(uint32_t, uint32_t, AtomicStore32SeqCst) - -#ifdef JIT_STOREOP_CAS -JIT_STOREOP_CAS(int64_t) -JIT_STOREOP_CAS(uint64_t) -#else -JIT_STOREOP(int64_t, uint64_t, AtomicStore64SeqCst) -JIT_STOREOP(uint64_t, uint64_t, AtomicStore64SeqCst) -#endif - -}} - -#undef JIT_STOREOP -#undef JIT_STOREOP_CAS - -#define JIT_EXCHANGEOP(T, U, xchgop) \ - template<> inline T \ - AtomicOperations::exchangeSeqCst(T* addr, T val) { \ - JS::AutoSuppressGCAnalysis nogc; \ - return (T)xchgop((U*)addr, (U)val); \ - } - -#ifndef JS_64BIT -# define JIT_EXCHANGEOP_CAS(T) \ - template<> inline T \ - AtomicOperations::exchangeSeqCst(T* addr, T val) { \ - JS::AutoSuppressGCAnalysis nogc; \ - AtomicCompilerFence(); \ - T oldval = *addr; \ - for (;;) { \ - T nextval = (T)AtomicCmpXchg64SeqCst((uint64_t*)addr, \ - (uint64_t)oldval, \ - (uint64_t)val); \ - if (nextval == oldval) { \ - break; \ - } \ - oldval = nextval; \ - } \ - AtomicCompilerFence(); \ - return oldval; \ - } -#endif // !JS_64BIT - -namespace js { -namespace jit { - -JIT_EXCHANGEOP(int8_t, uint8_t, AtomicExchange8SeqCst) -JIT_EXCHANGEOP(uint8_t, 
uint8_t, AtomicExchange8SeqCst) -JIT_EXCHANGEOP(int16_t, uint16_t, AtomicExchange16SeqCst) -JIT_EXCHANGEOP(uint16_t, uint16_t, AtomicExchange16SeqCst) -JIT_EXCHANGEOP(int32_t, uint32_t, AtomicExchange32SeqCst) -JIT_EXCHANGEOP(uint32_t, uint32_t, AtomicExchange32SeqCst) - -#ifdef JIT_EXCHANGEOP_CAS -JIT_EXCHANGEOP_CAS(int64_t) -JIT_EXCHANGEOP_CAS(uint64_t) -#else -JIT_EXCHANGEOP(int64_t, uint64_t, AtomicExchange64SeqCst) -JIT_EXCHANGEOP(uint64_t, uint64_t, AtomicExchange64SeqCst) -#endif - -}} - -#undef JIT_EXCHANGEOP -#undef JIT_EXCHANGEOP_CAS - -#define JIT_CAS(T, U, cmpxchg) \ - template<> inline T \ - AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, T newval) { \ - JS::AutoSuppressGCAnalysis nogc; \ - return (T)cmpxchg((U*)addr, (U)oldval, (U)newval); \ - } - -namespace js { -namespace jit { - -JIT_CAS(int8_t, uint8_t, AtomicCmpXchg8SeqCst) -JIT_CAS(uint8_t, uint8_t, AtomicCmpXchg8SeqCst) -JIT_CAS(int16_t, uint16_t, AtomicCmpXchg16SeqCst) -JIT_CAS(uint16_t, uint16_t, AtomicCmpXchg16SeqCst) -JIT_CAS(int32_t, uint32_t, AtomicCmpXchg32SeqCst) -JIT_CAS(uint32_t, uint32_t, AtomicCmpXchg32SeqCst) -JIT_CAS(int64_t, uint64_t, AtomicCmpXchg64SeqCst) -JIT_CAS(uint64_t, uint64_t, AtomicCmpXchg64SeqCst) - -}} - -#undef JIT_CAS - -#define JIT_FETCHADDOP(T, U, xadd) \ - template<> inline T \ - AtomicOperations::fetchAddSeqCst(T* addr, T val) { \ - JS::AutoSuppressGCAnalysis nogc; \ - return (T)xadd((U*)addr, (U)val); \ - } \ - -#define JIT_FETCHSUBOP(T) \ - template<> inline T \ - AtomicOperations::fetchSubSeqCst(T* addr, T val) { \ - JS::AutoSuppressGCAnalysis nogc; \ - return fetchAddSeqCst(addr, (T)(0-val)); \ - } - -#ifndef JS_64BIT -# define JIT_FETCHADDOP_CAS(T) \ - template<> inline T \ - AtomicOperations::fetchAddSeqCst(T* addr, T val) { \ - JS::AutoSuppressGCAnalysis nogc; \ - AtomicCompilerFence(); \ - T oldval = *addr; /* Good initial approximation */ \ - for (;;) { \ - T nextval = (T)AtomicCmpXchg64SeqCst((uint64_t*)addr, \ - (uint64_t)oldval, \ - (uint64_t)(oldval + val)); \ - if (nextval == oldval) { \ - break; \ - } \ - oldval = nextval; \ - } \ - AtomicCompilerFence(); \ - return oldval; \ - } -#endif // !JS_64BIT - -namespace js { -namespace jit { - -JIT_FETCHADDOP(int8_t, uint8_t, AtomicAdd8SeqCst) -JIT_FETCHADDOP(uint8_t, uint8_t, AtomicAdd8SeqCst) -JIT_FETCHADDOP(int16_t, uint16_t, AtomicAdd16SeqCst) -JIT_FETCHADDOP(uint16_t, uint16_t, AtomicAdd16SeqCst) -JIT_FETCHADDOP(int32_t, uint32_t, AtomicAdd32SeqCst) -JIT_FETCHADDOP(uint32_t, uint32_t, AtomicAdd32SeqCst) - -#ifdef JIT_FETCHADDOP_CAS -JIT_FETCHADDOP_CAS(int64_t) -JIT_FETCHADDOP_CAS(uint64_t) -#else -JIT_FETCHADDOP(int64_t, uint64_t, AtomicAdd64SeqCst) -JIT_FETCHADDOP(uint64_t, uint64_t, AtomicAdd64SeqCst) -#endif - -JIT_FETCHSUBOP(int8_t) -JIT_FETCHSUBOP(uint8_t) -JIT_FETCHSUBOP(int16_t) -JIT_FETCHSUBOP(uint16_t) -JIT_FETCHSUBOP(int32_t) -JIT_FETCHSUBOP(uint32_t) -JIT_FETCHSUBOP(int64_t) -JIT_FETCHSUBOP(uint64_t) - -}} - -#undef JIT_FETCHADDOP -#undef JIT_FETCHADDOP_CAS -#undef JIT_FETCHSUBOP - -#define JIT_FETCHBITOPX(T, U, name, op) \ - template<> inline T \ - AtomicOperations::name(T* addr, T val) { \ - JS::AutoSuppressGCAnalysis nogc; \ - return (T)op((U *)addr, (U)val); \ - } - -#define JIT_FETCHBITOP(T, U, andop, orop, xorop) \ - JIT_FETCHBITOPX(T, U, fetchAndSeqCst, andop) \ - JIT_FETCHBITOPX(T, U, fetchOrSeqCst, orop) \ - JIT_FETCHBITOPX(T, U, fetchXorSeqCst, xorop) - -#ifndef JS_64BIT - -# define AND_OP & -# define OR_OP | -# define XOR_OP ^ - -# define JIT_FETCHBITOPX_CAS(T, name, OP) \ - template<> 
inline T \ - AtomicOperations::name(T* addr, T val) { \ - JS::AutoSuppressGCAnalysis nogc; \ - AtomicCompilerFence(); \ - T oldval = *addr; \ - for (;;) { \ - T nextval = (T)AtomicCmpXchg64SeqCst((uint64_t*)addr, \ - (uint64_t)oldval, \ - (uint64_t)(oldval OP val)); \ - if (nextval == oldval) { \ - break; \ - } \ - oldval = nextval; \ - } \ - AtomicCompilerFence(); \ - return oldval; \ - } - -# define JIT_FETCHBITOP_CAS(T) \ - JIT_FETCHBITOPX_CAS(T, fetchAndSeqCst, AND_OP) \ - JIT_FETCHBITOPX_CAS(T, fetchOrSeqCst, OR_OP) \ - JIT_FETCHBITOPX_CAS(T, fetchXorSeqCst, XOR_OP) - -#endif // !JS_64BIT - -namespace js { -namespace jit { - -JIT_FETCHBITOP(int8_t, uint8_t, AtomicAnd8SeqCst, AtomicOr8SeqCst, AtomicXor8SeqCst) -JIT_FETCHBITOP(uint8_t, uint8_t, AtomicAnd8SeqCst, AtomicOr8SeqCst, AtomicXor8SeqCst) -JIT_FETCHBITOP(int16_t, uint16_t, AtomicAnd16SeqCst, AtomicOr16SeqCst, AtomicXor16SeqCst) -JIT_FETCHBITOP(uint16_t, uint16_t, AtomicAnd16SeqCst, AtomicOr16SeqCst, AtomicXor16SeqCst) -JIT_FETCHBITOP(int32_t, uint32_t, AtomicAnd32SeqCst, AtomicOr32SeqCst, AtomicXor32SeqCst) -JIT_FETCHBITOP(uint32_t, uint32_t, AtomicAnd32SeqCst, AtomicOr32SeqCst, AtomicXor32SeqCst) - -#ifdef JIT_FETCHBITOP_CAS -JIT_FETCHBITOP_CAS(int64_t) -JIT_FETCHBITOP_CAS(uint64_t) -#else -JIT_FETCHBITOP(int64_t, uint64_t, AtomicAnd64SeqCst, AtomicOr64SeqCst, AtomicXor64SeqCst) -JIT_FETCHBITOP(uint64_t, uint64_t, AtomicAnd64SeqCst, AtomicOr64SeqCst, AtomicXor64SeqCst) -#endif - -}} - -#undef JIT_FETCHBITOPX_CAS -#undef JIT_FETCHBITOPX -#undef JIT_FETCHBITOP_CAS -#undef JIT_FETCHBITOP - -#define JIT_LOADSAFE(T, U, loadop) \ - template<> \ - inline T \ - js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { \ - JS::AutoSuppressGCAnalysis nogc; \ - union { U u; T t; }; \ - u = loadop((U*)addr); \ - return t; \ - } - -#ifndef JS_64BIT -# define JIT_LOADSAFE_TEARING(T) \ - template<> \ - inline T \ - js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { \ - JS::AutoSuppressGCAnalysis nogc; \ - MOZ_ASSERT(sizeof(T) == 8); \ - union { uint32_t u[2]; T t; }; \ - uint32_t* ptr = (uint32_t*)addr; \ - u[0] = AtomicLoad32Unsynchronized(ptr); \ - u[1] = AtomicLoad32Unsynchronized(ptr + 1); \ - return t; \ - } -#endif // !JS_64BIT - -namespace js { -namespace jit { - -JIT_LOADSAFE(int8_t, uint8_t, AtomicLoad8Unsynchronized) -JIT_LOADSAFE(uint8_t, uint8_t, AtomicLoad8Unsynchronized) -JIT_LOADSAFE(int16_t, uint16_t, AtomicLoad16Unsynchronized) -JIT_LOADSAFE(uint16_t, uint16_t, AtomicLoad16Unsynchronized) -JIT_LOADSAFE(int32_t, uint32_t, AtomicLoad32Unsynchronized) -JIT_LOADSAFE(uint32_t, uint32_t, AtomicLoad32Unsynchronized) -#ifdef JIT_LOADSAFE_TEARING -JIT_LOADSAFE_TEARING(int64_t) -JIT_LOADSAFE_TEARING(uint64_t) -JIT_LOADSAFE_TEARING(double) -#else -JIT_LOADSAFE(int64_t, uint64_t, AtomicLoad64Unsynchronized) -JIT_LOADSAFE(uint64_t, uint64_t, AtomicLoad64Unsynchronized) -JIT_LOADSAFE(double, uint64_t, AtomicLoad64Unsynchronized) -#endif -JIT_LOADSAFE(float, uint32_t, AtomicLoad32Unsynchronized) - -// Clang requires a specialization for uint8_clamped. 
-template<> -inline uint8_clamped js::jit::AtomicOperations::loadSafeWhenRacy( - uint8_clamped* addr) { - return uint8_clamped(loadSafeWhenRacy((uint8_t*)addr)); -} - -}} - -#undef JIT_LOADSAFE -#undef JIT_LOADSAFE_TEARING - -#define JIT_STORESAFE(T, U, storeop) \ - template<> \ - inline void \ - js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { \ - JS::AutoSuppressGCAnalysis nogc; \ - union { U u; T t; }; \ - t = val; \ - storeop((U*)addr, u); \ - } - -#ifndef JS_64BIT -# define JIT_STORESAFE_TEARING(T) \ - template<> \ - inline void \ - js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { \ - JS::AutoSuppressGCAnalysis nogc; \ - union { uint32_t u[2]; T t; }; \ - t = val; \ - uint32_t* ptr = (uint32_t*)addr; \ - AtomicStore32Unsynchronized(ptr, u[0]); \ - AtomicStore32Unsynchronized(ptr + 1, u[1]); \ - } -#endif // !JS_64BIT - -namespace js { -namespace jit { - -JIT_STORESAFE(int8_t, uint8_t, AtomicStore8Unsynchronized) -JIT_STORESAFE(uint8_t, uint8_t, AtomicStore8Unsynchronized) -JIT_STORESAFE(int16_t, uint16_t, AtomicStore16Unsynchronized) -JIT_STORESAFE(uint16_t, uint16_t, AtomicStore16Unsynchronized) -JIT_STORESAFE(int32_t, uint32_t, AtomicStore32Unsynchronized) -JIT_STORESAFE(uint32_t, uint32_t, AtomicStore32Unsynchronized) -#ifdef JIT_STORESAFE_TEARING -JIT_STORESAFE_TEARING(int64_t) -JIT_STORESAFE_TEARING(uint64_t) -JIT_STORESAFE_TEARING(double) -#else -JIT_STORESAFE(int64_t, uint64_t, AtomicStore64Unsynchronized) -JIT_STORESAFE(uint64_t, uint64_t, AtomicStore64Unsynchronized) -JIT_STORESAFE(double, uint64_t, AtomicStore64Unsynchronized) -#endif -JIT_STORESAFE(float, uint32_t, AtomicStore32Unsynchronized) - -// Clang requires a specialization for uint8_clamped. -template<> -inline void js::jit::AtomicOperations::storeSafeWhenRacy(uint8_clamped* addr, - uint8_clamped val) { - storeSafeWhenRacy((uint8_t*)addr, (uint8_t)val); -} - -}} - -#undef JIT_STORESAFE -#undef JIT_STORESAFE_TEARING - -void js::jit::AtomicOperations::memcpySafeWhenRacy(void* dest, const void* src, - size_t nbytes) { - JS::AutoSuppressGCAnalysis nogc; - MOZ_ASSERT(!((char*)dest <= (char*)src && (char*)src < (char*)dest+nbytes)); - MOZ_ASSERT(!((char*)src <= (char*)dest && (char*)dest < (char*)src+nbytes)); - AtomicMemcpyDownUnsynchronized((uint8_t*)dest, (const uint8_t*)src, nbytes); -} - -inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, - const void* src, - size_t nbytes) { - JS::AutoSuppressGCAnalysis nogc; - if ((char*)dest <= (char*)src) { - AtomicMemcpyDownUnsynchronized((uint8_t*)dest, (const uint8_t*)src, - nbytes); - } else { - AtomicMemcpyUpUnsynchronized((uint8_t*)dest, (const uint8_t*)src, - nbytes); - } -} - -namespace js { -namespace jit { - -extern bool InitializeJittedAtomics(); -extern void ShutDownJittedAtomics(); - -}} - -inline bool js::jit::AtomicOperations::Initialize() { - return InitializeJittedAtomics(); -} - -inline void js::jit::AtomicOperations::ShutDown() { - ShutDownJittedAtomics(); -} - -#endif // jit_shared_AtomicOperations_shared_jit_h diff --git a/js/src/jit/x64/MacroAssembler-x64.cpp b/js/src/jit/x64/MacroAssembler-x64.cpp index 73abbc2a36a20..e765e08631ba6 100644 --- a/js/src/jit/x64/MacroAssembler-x64.cpp +++ b/js/src/jit/x64/MacroAssembler-x64.cpp @@ -930,33 +930,27 @@ void MacroAssembler::wasmAtomicExchange64(const wasm::MemoryAccessDesc& access, } template -static void AtomicFetchOp64(MacroAssembler& masm, - const wasm::MemoryAccessDesc* access, AtomicOp op, - Register value, const T& mem, Register temp, - Register output) { 
+static void WasmAtomicFetchOp64(MacroAssembler& masm, + const wasm::MemoryAccessDesc access, + AtomicOp op, Register value, const T& mem, + Register temp, Register output) { if (op == AtomicFetchAddOp) { if (value != output) { masm.movq(value, output); } - if (access) { - masm.append(*access, masm.size()); - } + masm.append(access, masm.size()); masm.lock_xaddq(output, Operand(mem)); } else if (op == AtomicFetchSubOp) { if (value != output) { masm.movq(value, output); } masm.negq(output); - if (access) { - masm.append(*access, masm.size()); - } + masm.append(access, masm.size()); masm.lock_xaddq(output, Operand(mem)); } else { Label again; MOZ_ASSERT(output == rax); - if (access) { - masm.append(*access, masm.size()); - } + masm.append(access, masm.size()); masm.movq(Operand(mem), rax); masm.bind(&again); masm.movq(rax, temp); @@ -982,14 +976,14 @@ void MacroAssembler::wasmAtomicFetchOp64(const wasm::MemoryAccessDesc& access, AtomicOp op, Register64 value, const Address& mem, Register64 temp, Register64 output) { - AtomicFetchOp64(*this, &access, op, value.reg, mem, temp.reg, output.reg); + WasmAtomicFetchOp64(*this, access, op, value.reg, mem, temp.reg, output.reg); } void MacroAssembler::wasmAtomicFetchOp64(const wasm::MemoryAccessDesc& access, AtomicOp op, Register64 value, const BaseIndex& mem, Register64 temp, Register64 output) { - AtomicFetchOp64(*this, &access, op, value.reg, mem, temp.reg, output.reg); + WasmAtomicFetchOp64(*this, access, op, value.reg, mem, temp.reg, output.reg); } void MacroAssembler::wasmAtomicEffectOp64(const wasm::MemoryAccessDesc& access, @@ -1017,30 +1011,4 @@ void MacroAssembler::wasmAtomicEffectOp64(const wasm::MemoryAccessDesc& access, } } -void MacroAssembler::compareExchange64(const Synchronization&, - const Address& mem, Register64 expected, - Register64 replacement, - Register64 output) { - MOZ_ASSERT(output.reg == rax); - if (expected != output) { - movq(expected.reg, output.reg); - } - lock_cmpxchgq(replacement.reg, Operand(mem)); -} - -void MacroAssembler::atomicExchange64(const Synchronization&, - const Address& mem, Register64 value, - Register64 output) { - if (value != output) { - movq(value.reg, output.reg); - } - xchgq(output.reg, Operand(mem)); -} - -void MacroAssembler::atomicFetchOp64(const Synchronization& sync, AtomicOp op, - Register64 value, const Address& mem, - Register64 temp, Register64 output) { - AtomicFetchOp64(*this, nullptr, op, value.reg, mem, temp.reg, output.reg); -} - //}}} check_macroassembler_style diff --git a/js/src/jit/x86-shared/Assembler-x86-shared.h b/js/src/jit/x86-shared/Assembler-x86-shared.h index b9c5d3f3bc7c3..15a35d7ac15df 100644 --- a/js/src/jit/x86-shared/Assembler-x86-shared.h +++ b/js/src/jit/x86-shared/Assembler-x86-shared.h @@ -209,19 +209,6 @@ class CPUInfo { static void SetSSEVersion(); - // The flags can become set at startup when we JIT non-JS code eagerly; thus - // we reset the flags before setting any flags explicitly during testing, so - // that the flags can be in a consistent state. 
- - static void reset() { - maxSSEVersion = UnknownSSE; - maxEnabledSSEVersion = UnknownSSE; - avxPresent = false; - avxEnabled = false; - popcntPresent = false; - needAmdBugWorkaround = false; - } - public: static bool IsSSE2Present() { #ifdef JS_CODEGEN_X64 @@ -241,19 +228,14 @@ class CPUInfo { static bool NeedAmdBugWorkaround() { return needAmdBugWorkaround; } static void SetSSE3Disabled() { - reset(); maxEnabledSSEVersion = SSE2; avxEnabled = false; } static void SetSSE4Disabled() { - reset(); maxEnabledSSEVersion = SSSE3; avxEnabled = false; } - static void SetAVXEnabled() { - reset(); - avxEnabled = true; - } + static void SetAVXEnabled() { avxEnabled = true; } }; class AssemblerX86Shared : public AssemblerShared { diff --git a/js/src/jit/x86-shared/AtomicOperations-x86-shared-gcc.h b/js/src/jit/x86-shared/AtomicOperations-x86-shared-gcc.h new file mode 100644 index 0000000000000..ddf8c61a7eb13 --- /dev/null +++ b/js/src/jit/x86-shared/AtomicOperations-x86-shared-gcc.h @@ -0,0 +1,235 @@ +/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- + * vim: set ts=8 sts=2 et sw=2 tw=80: + * This Source Code Form is subject to the terms of the Mozilla Public + * License, v. 2.0. If a copy of the MPL was not distributed with this + * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ + +/* For overall documentation, see jit/AtomicOperations.h */ + +#ifndef jit_shared_AtomicOperations_x86_shared_gcc_h +#define jit_shared_AtomicOperations_x86_shared_gcc_h + +#include "mozilla/Assertions.h" +#include "mozilla/Types.h" + +#include "vm/ArrayBufferObject.h" + +#if !defined(__clang__) && !defined(__GNUC__) +# error "This file only for gcc-compatible compilers" +#endif + +// Lock-freedom and access-atomicity on x86 and x64. +// +// In general, aligned accesses are access-atomic up to 8 bytes ever since the +// Pentium; Firefox requires SSE2, which was introduced with the Pentium 4, so +// we may assume access-atomicity. +// +// Four-byte accesses and smaller are simple: +// - Use MOV{B,W,L} to load and store. Stores require a post-fence +// for sequential consistency as defined by the JS spec. The fence +// can be MFENCE, or the store can be implemented using XCHG. +// - For compareExchange use LOCK; CMPXCGH{B,W,L} +// - For exchange, use XCHG{B,W,L} +// - For add, etc use LOCK; ADD{B,W,L} etc +// +// Eight-byte accesses are easy on x64: +// - Use MOVQ to load and store (again with a fence for the store) +// - For compareExchange, we use CMPXCHGQ +// - For exchange, we use XCHGQ +// - For add, etc use LOCK; ADDQ etc +// +// Eight-byte accesses are harder on x86: +// - For load, use a sequence of MOVL + CMPXCHG8B +// - For store, use a sequence of MOVL + a CMPXCGH8B in a loop, +// no additional fence required +// - For exchange, do as for store +// - For add, etc do as for store + +// Firefox requires gcc > 4.8, so we will always have the __atomic intrinsics +// added for use in C++11 . +// +// Note that using these intrinsics for most operations is not correct: the code +// has undefined behavior. The gcc documentation states that the compiler +// assumes the code is race free. This supposedly means C++ will allow some +// instruction reorderings (effectively those allowed by TSO) even for seq_cst +// ordered operations, but these reorderings are not allowed by JS. To do +// better we will end up with inline assembler or JIT-generated code. + +// For now, we require that the C++ compiler's atomics are lock free, even for +// 64-bit accesses. 
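
Editorial illustration, not part of the patch: the CMPXCHG8B-in-a-loop idiom described above is the same compare-exchange loop that the 32-bit fallbacks in this series spell out with AtomicCmpXchg64SeqCst and _InterlockedCompareExchange64. A minimal sketch of that idiom, assuming only the gcc/clang __atomic builtins, looks like this:

#include <stdint.h>

// Sketch only: a sequentially consistent 64-bit store on 32-bit x86,
// written as a compare-exchange loop.
static inline void ExampleStore64SeqCst(uint64_t* addr, uint64_t val) {
  // Racy initial read; it only seeds the loop with a first guess.
  uint64_t expected = *addr;
  // On 32-bit x86 this typically lowers to a LOCK CMPXCHG8B loop, or to a
  // libatomic call (see the -latomic note below), depending on flags.
  while (!__atomic_compare_exchange_n(addr, &expected, val,
                                      /* weak= */ false,
                                      __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) {
    // A failed compare-exchange refreshes 'expected' with the value
    // observed in memory, so the next iteration retries against it.
  }
}

The loop terminates because each failure reflects another thread's store; no extra fence is needed, as the comment block above notes for the CMPXCHG8B-based store.
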
+ +// When compiling with Clang on 32-bit linux it will be necessary to link with +// -latomic to get the proper 64-bit intrinsics. + +inline bool js::jit::AtomicOperations::hasAtomic8() { return true; } + +inline bool js::jit::AtomicOperations::isLockfree8() { + MOZ_ASSERT(__atomic_always_lock_free(sizeof(int8_t), 0)); + MOZ_ASSERT(__atomic_always_lock_free(sizeof(int16_t), 0)); + MOZ_ASSERT(__atomic_always_lock_free(sizeof(int32_t), 0)); + MOZ_ASSERT(__atomic_always_lock_free(sizeof(int64_t), 0)); + return true; +} + +inline void js::jit::AtomicOperations::fenceSeqCst() { + __atomic_thread_fence(__ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::loadSeqCst(T* addr) { + MOZ_ASSERT(tier1Constraints(addr)); + T v; + __atomic_load(addr, &v, __ATOMIC_SEQ_CST); + return v; +} + +template +inline void js::jit::AtomicOperations::storeSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + __atomic_store(addr, &val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::exchangeSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + T v; + __atomic_exchange(addr, &val, &v, __ATOMIC_SEQ_CST); + return v; +} + +template +inline T js::jit::AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, + T newval) { + MOZ_ASSERT(tier1Constraints(addr)); + __atomic_compare_exchange(addr, &oldval, &newval, false, __ATOMIC_SEQ_CST, + __ATOMIC_SEQ_CST); + return oldval; +} + +template +inline T js::jit::AtomicOperations::fetchAddSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_add(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::fetchSubSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_sub(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::fetchAndSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_and(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::fetchOrSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_or(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::fetchXorSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + return __atomic_fetch_xor(addr, val, __ATOMIC_SEQ_CST); +} + +template +inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { + MOZ_ASSERT(tier1Constraints(addr)); + T v; + __atomic_load(addr, &v, __ATOMIC_RELAXED); + return v; +} + +namespace js { +namespace jit { + +#define GCC_RACYLOADOP(T) \ + template <> \ + inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { \ + return *addr; \ + } + +// On 32-bit platforms, loadSafeWhenRacy need not be access-atomic for 64-bit +// data, so just use regular accesses instead of the expensive __atomic_load +// solution which must use CMPXCHG8B. +#ifndef JS_64BIT +GCC_RACYLOADOP(int64_t) +GCC_RACYLOADOP(uint64_t) +#endif + +// Float and double accesses are not access-atomic. +GCC_RACYLOADOP(float) +GCC_RACYLOADOP(double) + +// Clang requires a specialization for uint8_clamped. 
+template <> +inline uint8_clamped js::jit::AtomicOperations::loadSafeWhenRacy( + uint8_clamped* addr) { + uint8_t v; + __atomic_load(&addr->val, &v, __ATOMIC_RELAXED); + return uint8_clamped(v); +} + +#undef GCC_RACYLOADOP + +} // namespace jit +} // namespace js + +template +inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + __atomic_store(addr, &val, __ATOMIC_RELAXED); +} + +namespace js { +namespace jit { + +#define GCC_RACYSTOREOP(T) \ + template <> \ + inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { \ + *addr = val; \ + } + +// On 32-bit platforms, storeSafeWhenRacy need not be access-atomic for 64-bit +// data, so just use regular accesses instead of the expensive __atomic_store +// solution which must use CMPXCHG8B. +#ifndef JS_64BIT +GCC_RACYSTOREOP(int64_t) +GCC_RACYSTOREOP(uint64_t) +#endif + +// Float and double accesses are not access-atomic. +GCC_RACYSTOREOP(float) +GCC_RACYSTOREOP(double) + +// Clang requires a specialization for uint8_clamped. +template <> +inline void js::jit::AtomicOperations::storeSafeWhenRacy(uint8_clamped* addr, + uint8_clamped val) { + __atomic_store(&addr->val, &val.val, __ATOMIC_RELAXED); +} + +#undef GCC_RACYSTOREOP + +} // namespace jit +} // namespace js + +inline void js::jit::AtomicOperations::memcpySafeWhenRacy(void* dest, + const void* src, + size_t nbytes) { + MOZ_ASSERT(!((char*)dest <= (char*)src && (char*)src < (char*)dest + nbytes)); + MOZ_ASSERT(!((char*)src <= (char*)dest && (char*)dest < (char*)src + nbytes)); + ::memcpy(dest, src, nbytes); +} + +inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, + const void* src, + size_t nbytes) { + ::memmove(dest, src, nbytes); +} + +#endif // jit_shared_AtomicOperations_x86_shared_gcc_h diff --git a/js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h b/js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h new file mode 100644 index 0000000000000..c0b5a0f0a50c4 --- /dev/null +++ b/js/src/jit/x86-shared/AtomicOperations-x86-shared-msvc.h @@ -0,0 +1,367 @@ +/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- + * vim: set ts=8 sts=2 et sw=2 tw=80: + * This Source Code Form is subject to the terms of the Mozilla Public + * License, v. 2.0. If a copy of the MPL was not distributed with this + * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ + +#ifndef jit_shared_AtomicOperations_x86_shared_msvc_h +#define jit_shared_AtomicOperations_x86_shared_msvc_h + +#include "mozilla/Assertions.h" +#include "mozilla/Types.h" + +#if !defined(_MSC_VER) +# error "This file only for Microsoft Visual C++" +#endif + +// For overall documentation, see jit/AtomicOperations.h/ +// +// For general comments on lock-freedom, access-atomicity, and related matters +// on x86 and x64, notably for justification of the implementations of the +// 64-bit primitives on 32-bit systems, see the comment block in +// AtomicOperations-x86-shared-gcc.h. + +// Below, _ReadWriteBarrier is a compiler directive, preventing reordering of +// instructions and reuse of memory values across it in the compiler, but having +// no impact on what the CPU does. + +// Note, here we use MSVC intrinsics directly. But MSVC supports a slightly +// higher level of function which uses the intrinsic when possible (8, 16, and +// 32-bit operations, and 64-bit operations on 64-bit systems) and otherwise +// falls back on CMPXCHG8B for 64-bit operations on 32-bit systems. 
We could be +// using those functions in many cases here (though not all). I have not done +// so because (a) I don't yet know how far back those functions are supported +// and (b) I expect we'll end up dropping into assembler here eventually so as +// to guarantee that the C++ compiler won't optimize the code. + +// Note, _InterlockedCompareExchange takes the *new* value as the second +// argument and the *comparand* (expected old value) as the third argument. + +inline bool js::jit::AtomicOperations::hasAtomic8() { return true; } + +inline bool js::jit::AtomicOperations::isLockfree8() { + // The MSDN docs suggest very strongly that if code is compiled for Pentium + // or better the 64-bit primitives will be lock-free, see eg the "Remarks" + // secion of the page for _InterlockedCompareExchange64, currently here: + // https://msdn.microsoft.com/en-us/library/ttk2z1ws%28v=vs.85%29.aspx + // + // But I've found no way to assert that at compile time or run time, there + // appears to be no WinAPI is_lock_free() test. + + return true; +} + +inline void js::jit::AtomicOperations::fenceSeqCst() { + _ReadWriteBarrier(); + _mm_mfence(); +} + +template +inline T js::jit::AtomicOperations::loadSeqCst(T* addr) { + MOZ_ASSERT(tier1Constraints(addr)); + _ReadWriteBarrier(); + T v = *addr; + _ReadWriteBarrier(); + return v; +} + +#ifdef _M_IX86 +namespace js { +namespace jit { + +# define MSC_LOADOP(T) \ + template <> \ + inline T AtomicOperations::loadSeqCst(T* addr) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ + _ReadWriteBarrier(); \ + return (T)_InterlockedCompareExchange64((__int64 volatile*)addr, 0, 0); \ + } + +MSC_LOADOP(int64_t) +MSC_LOADOP(uint64_t) + +# undef MSC_LOADOP + +} // namespace jit +} // namespace js +#endif // _M_IX86 + +template +inline void js::jit::AtomicOperations::storeSeqCst(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + _ReadWriteBarrier(); + *addr = val; + fenceSeqCst(); +} + +#ifdef _M_IX86 +namespace js { +namespace jit { + +# define MSC_STOREOP(T) \ + template <> \ + inline void AtomicOperations::storeSeqCst(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ + _ReadWriteBarrier(); \ + T oldval = *addr; \ + for (;;) { \ + T nextval = (T)_InterlockedCompareExchange64( \ + (__int64 volatile*)addr, (__int64)val, (__int64)oldval); \ + if (nextval == oldval) break; \ + oldval = nextval; \ + } \ + _ReadWriteBarrier(); \ + } + +MSC_STOREOP(int64_t) +MSC_STOREOP(uint64_t) + +# undef MSC_STOREOP + +} // namespace jit +} // namespace js +#endif // _M_IX86 + +#define MSC_EXCHANGEOP(T, U, xchgop) \ + template <> \ + inline T AtomicOperations::exchangeSeqCst(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ + return (T)xchgop((U volatile*)addr, (U)val); \ + } + +#ifdef _M_IX86 +# define MSC_EXCHANGEOP_CAS(T) \ + template <> \ + inline T AtomicOperations::exchangeSeqCst(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ + _ReadWriteBarrier(); \ + T oldval = *addr; \ + for (;;) { \ + T nextval = (T)_InterlockedCompareExchange64( \ + (__int64 volatile*)addr, (__int64)val, (__int64)oldval); \ + if (nextval == oldval) break; \ + oldval = nextval; \ + } \ + _ReadWriteBarrier(); \ + return oldval; \ + } +#endif // _M_IX86 + +namespace js { +namespace jit { + +MSC_EXCHANGEOP(int8_t, char, _InterlockedExchange8) +MSC_EXCHANGEOP(uint8_t, char, _InterlockedExchange8) +MSC_EXCHANGEOP(int16_t, short, _InterlockedExchange16) +MSC_EXCHANGEOP(uint16_t, short, _InterlockedExchange16) +MSC_EXCHANGEOP(int32_t, long, _InterlockedExchange) +MSC_EXCHANGEOP(uint32_t, 
long, _InterlockedExchange) + +#ifdef _M_IX86 +MSC_EXCHANGEOP_CAS(int64_t) +MSC_EXCHANGEOP_CAS(uint64_t) +#else +MSC_EXCHANGEOP(int64_t, __int64, _InterlockedExchange64) +MSC_EXCHANGEOP(uint64_t, __int64, _InterlockedExchange64) +#endif + +} // namespace jit +} // namespace js + +#undef MSC_EXCHANGEOP +#undef MSC_EXCHANGEOP_CAS + +#define MSC_CAS(T, U, cmpxchg) \ + template <> \ + inline T AtomicOperations::compareExchangeSeqCst(T* addr, T oldval, \ + T newval) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ + return (T)cmpxchg((U volatile*)addr, (U)newval, (U)oldval); \ + } + +namespace js { +namespace jit { + +MSC_CAS(int8_t, char, _InterlockedCompareExchange8) +MSC_CAS(uint8_t, char, _InterlockedCompareExchange8) +MSC_CAS(int16_t, short, _InterlockedCompareExchange16) +MSC_CAS(uint16_t, short, _InterlockedCompareExchange16) +MSC_CAS(int32_t, long, _InterlockedCompareExchange) +MSC_CAS(uint32_t, long, _InterlockedCompareExchange) +MSC_CAS(int64_t, __int64, _InterlockedCompareExchange64) +MSC_CAS(uint64_t, __int64, _InterlockedCompareExchange64) + +} // namespace jit +} // namespace js + +#undef MSC_CAS + +#define MSC_FETCHADDOP(T, U, xadd) \ + template <> \ + inline T AtomicOperations::fetchAddSeqCst(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ + return (T)xadd((U volatile*)addr, (U)val); \ + } + +#define MSC_FETCHSUBOP(T) \ + template <> \ + inline T AtomicOperations::fetchSubSeqCst(T* addr, T val) { \ + return fetchAddSeqCst(addr, (T)(0 - val)); \ + } + +#ifdef _M_IX86 +# define MSC_FETCHADDOP_CAS(T) \ + template <> \ + inline T AtomicOperations::fetchAddSeqCst(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ + _ReadWriteBarrier(); \ + T oldval = *addr; \ + for (;;) { \ + T nextval = (T)_InterlockedCompareExchange64((__int64 volatile*)addr, \ + (__int64)(oldval + val), \ + (__int64)oldval); \ + if (nextval == oldval) break; \ + oldval = nextval; \ + } \ + _ReadWriteBarrier(); \ + return oldval; \ + } +#endif // _M_IX86 + +namespace js { +namespace jit { + +MSC_FETCHADDOP(int8_t, char, _InterlockedExchangeAdd8) +MSC_FETCHADDOP(uint8_t, char, _InterlockedExchangeAdd8) +MSC_FETCHADDOP(int16_t, short, _InterlockedExchangeAdd16) +MSC_FETCHADDOP(uint16_t, short, _InterlockedExchangeAdd16) +MSC_FETCHADDOP(int32_t, long, _InterlockedExchangeAdd) +MSC_FETCHADDOP(uint32_t, long, _InterlockedExchangeAdd) + +#ifdef _M_IX86 +MSC_FETCHADDOP_CAS(int64_t) +MSC_FETCHADDOP_CAS(uint64_t) +#else +MSC_FETCHADDOP(int64_t, __int64, _InterlockedExchangeAdd64) +MSC_FETCHADDOP(uint64_t, __int64, _InterlockedExchangeAdd64) +#endif + +MSC_FETCHSUBOP(int8_t) +MSC_FETCHSUBOP(uint8_t) +MSC_FETCHSUBOP(int16_t) +MSC_FETCHSUBOP(uint16_t) +MSC_FETCHSUBOP(int32_t) +MSC_FETCHSUBOP(uint32_t) +MSC_FETCHSUBOP(int64_t) +MSC_FETCHSUBOP(uint64_t) + +} // namespace jit +} // namespace js + +#undef MSC_FETCHADDOP +#undef MSC_FETCHADDOP_CAS +#undef MSC_FETCHSUBOP + +#define MSC_FETCHBITOPX(T, U, name, op) \ + template <> \ + inline T AtomicOperations::name(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ + return (T)op((U volatile*)addr, (U)val); \ + } + +#define MSC_FETCHBITOP(T, U, andop, orop, xorop) \ + MSC_FETCHBITOPX(T, U, fetchAndSeqCst, andop) \ + MSC_FETCHBITOPX(T, U, fetchOrSeqCst, orop) \ + MSC_FETCHBITOPX(T, U, fetchXorSeqCst, xorop) + +#ifdef _M_IX86 +# define AND_OP & +# define OR_OP | +# define XOR_OP ^ +# define MSC_FETCHBITOPX_CAS(T, name, OP) \ + template <> \ + inline T AtomicOperations::name(T* addr, T val) { \ + MOZ_ASSERT(tier1Constraints(addr)); \ + _ReadWriteBarrier(); \ + T 
oldval = *addr; \ + for (;;) { \ + T nextval = (T)_InterlockedCompareExchange64((__int64 volatile*)addr, \ + (__int64)(oldval OP val), \ + (__int64)oldval); \ + if (nextval == oldval) break; \ + oldval = nextval; \ + } \ + _ReadWriteBarrier(); \ + return oldval; \ + } + +# define MSC_FETCHBITOP_CAS(T) \ + MSC_FETCHBITOPX_CAS(T, fetchAndSeqCst, AND_OP) \ + MSC_FETCHBITOPX_CAS(T, fetchOrSeqCst, OR_OP) \ + MSC_FETCHBITOPX_CAS(T, fetchXorSeqCst, XOR_OP) + +#endif + +namespace js { +namespace jit { + +MSC_FETCHBITOP(int8_t, char, _InterlockedAnd8, _InterlockedOr8, + _InterlockedXor8) +MSC_FETCHBITOP(uint8_t, char, _InterlockedAnd8, _InterlockedOr8, + _InterlockedXor8) +MSC_FETCHBITOP(int16_t, short, _InterlockedAnd16, _InterlockedOr16, + _InterlockedXor16) +MSC_FETCHBITOP(uint16_t, short, _InterlockedAnd16, _InterlockedOr16, + _InterlockedXor16) +MSC_FETCHBITOP(int32_t, long, _InterlockedAnd, _InterlockedOr, _InterlockedXor) +MSC_FETCHBITOP(uint32_t, long, _InterlockedAnd, _InterlockedOr, _InterlockedXor) + +#ifdef _M_IX86 +MSC_FETCHBITOP_CAS(int64_t) +MSC_FETCHBITOP_CAS(uint64_t) +#else +MSC_FETCHBITOP(int64_t, __int64, _InterlockedAnd64, _InterlockedOr64, + _InterlockedXor64) +MSC_FETCHBITOP(uint64_t, __int64, _InterlockedAnd64, _InterlockedOr64, + _InterlockedXor64) +#endif + +} // namespace jit +} // namespace js + +#undef MSC_FETCHBITOPX_CAS +#undef MSC_FETCHBITOPX +#undef MSC_FETCHBITOP_CAS +#undef MSC_FETCHBITOP + +template +inline T js::jit::AtomicOperations::loadSafeWhenRacy(T* addr) { + MOZ_ASSERT(tier1Constraints(addr)); + // This is also appropriate for double, int64, and uint64 on 32-bit + // platforms since there are no guarantees of access-atomicity. + return *addr; +} + +template +inline void js::jit::AtomicOperations::storeSafeWhenRacy(T* addr, T val) { + MOZ_ASSERT(tier1Constraints(addr)); + // This is also appropriate for double, int64, and uint64 on 32-bit + // platforms since there are no guarantees of access-atomicity. 
+ *addr = val; +} + +inline void js::jit::AtomicOperations::memcpySafeWhenRacy(void* dest, + const void* src, + size_t nbytes) { + MOZ_ASSERT(!((char*)dest <= (char*)src && (char*)src < (char*)dest + nbytes)); + MOZ_ASSERT(!((char*)src <= (char*)dest && (char*)dest < (char*)src + nbytes)); + ::memcpy(dest, src, nbytes); +} + +inline void js::jit::AtomicOperations::memmoveSafeWhenRacy(void* dest, + const void* src, + size_t nbytes) { + ::memmove(dest, src, nbytes); +} + +#endif // jit_shared_AtomicOperations_x86_shared_msvc_h diff --git a/js/src/vm/Initialization.cpp b/js/src/vm/Initialization.cpp index ce7aa24305864..0067fde48885a 100644 --- a/js/src/vm/Initialization.cpp +++ b/js/src/vm/Initialization.cpp @@ -17,7 +17,6 @@ #include "builtin/AtomicsObject.h" #include "ds/MemoryProtectionExceptionHandler.h" #include "gc/Statistics.h" -#include "jit/AtomicOperations.h" #include "jit/ExecutableAllocator.h" #include "jit/Ion.h" #include "jit/JitCommon.h" @@ -128,8 +127,6 @@ JS_PUBLIC_API const char* JS::detail::InitWithFailureDiagnostic( RETURN_IF_FAIL(js::vtune::Initialize()); #endif - RETURN_IF_FAIL(js::jit::AtomicOperations::Initialize()); - #if EXPOSE_INTL_API UErrorCode err = U_ZERO_ERROR; u_init(&err); @@ -178,8 +175,6 @@ JS_PUBLIC_API void JS_ShutDown(void) { js::jit::SimulatorProcess::destroy(); #endif - js::jit::AtomicOperations::ShutDown(); - #ifdef JS_TRACE_LOGGING js::DestroyTraceLoggerThreadState(); js::DestroyTraceLoggerGraphState(); From f79e272ffde50b4397430b126f8196ba2449a321 Mon Sep 17 00:00:00 2001 From: Jon Coppeard Date: Mon, 21 Jan 2019 12:40:52 +0000 Subject: [PATCH 4/9] Bug 1520778 - Ensure implicit edges are marked on all paths through the marking code r=sfink --- js/src/gc/Marking.cpp | 35 +++++++++++++------------ js/src/jit-test/tests/gc/bug-1520778.js | 18 +++++++++++++ js/src/vm/JSObject.cpp | 4 +++ js/src/vm/JSScript.cpp | 2 +- 4 files changed, 41 insertions(+), 18 deletions(-) create mode 100644 js/src/jit-test/tests/gc/bug-1520778.js diff --git a/js/src/gc/Marking.cpp b/js/src/gc/Marking.cpp index 8e4cbbe2fe867..ccf02d8f92898 100644 --- a/js/src/gc/Marking.cpp +++ b/js/src/gc/Marking.cpp @@ -95,20 +95,20 @@ using mozilla::PodCopy; // '---------' '----------' '-----------------' // // | // // | // -// .--------. // -// o---------------->|traverse| . // -// /_\ '--------' ' . // -// | . . ' . // -// | . . ' . // -// | . . ' . // -// | .-----------. .-----------. ' . .--------------------. // -// | |markAndScan| |markAndPush| ' - |markAndTraceChildren|----> // -// | '-----------' '-----------' '--------------------' // -// | | \ // -// | | \ // -// | .----------------------. .----------------. // -// | |T::eagerlyMarkChildren| |pushMarkStackTop|<===Oo // -// | '----------------------' '----------------' || // +// .-----------. // +// o------------->|traverse(T)| . // +// /_\ '-----------' ' . // +// | . . ' . // +// | . . ' . // +// | . . ' . // +// | .--------------. .--------------. ' . .-----------------------. // +// | |markAndScan(T)| |markAndPush(T)| ' - |markAndTraceChildren(T)| // +// | '--------------' '--------------' '-----------------------' // +// | | \ | // +// | | \ | // +// | .----------------------. .----------------. .------------------. 
// +// | |eagerlyMarkChildren(T)| |pushMarkStackTop|<===Oo |T::traceChildren()|--> // +// | '----------------------' '----------------' || '------------------' // // | | || || // // | | || || // // | | || || // @@ -847,7 +847,6 @@ void GCMarker::traverse(JSString* thing) { template <> void GCMarker::traverse(LazyScript* thing) { markAndScan(thing); - markImplicitEdges(thing); } template <> void GCMarker::traverse(Shape* thing) { @@ -1032,7 +1031,7 @@ void LazyScript::traceChildren(JSTracer* trc) { } if (trc->isMarkingTracer()) { - return GCMarker::fromTracer(trc)->markImplicitEdges(this); + GCMarker::fromTracer(trc)->markImplicitEdges(this); } } inline void js::GCMarker::eagerlyMarkChildren(LazyScript* thing) { @@ -1068,6 +1067,8 @@ inline void js::GCMarker::eagerlyMarkChildren(LazyScript* thing) { for (auto i : IntegerRange(thing->numInnerFunctions())) { traverseEdge(thing, static_cast(innerFunctions[i])); } + + markImplicitEdges(thing); } void Shape::traceChildren(JSTracer* trc) { @@ -1101,7 +1102,7 @@ inline void js::GCMarker::eagerlyMarkChildren(Shape* shape) { traverseEdge(shape, shape->propidRef().get()); - // When triggered between slices on belhalf of a barrier, these + // When triggered between slices on behalf of a barrier, these // objects may reside in the nursery, so require an extra check. // FIXME: Bug 1157967 - remove the isTenured checks. if (shape->hasGetterObject() && shape->getterObject()->isTenured()) { diff --git a/js/src/jit-test/tests/gc/bug-1520778.js b/js/src/jit-test/tests/gc/bug-1520778.js new file mode 100644 index 0000000000000..27e98854cdb1a --- /dev/null +++ b/js/src/jit-test/tests/gc/bug-1520778.js @@ -0,0 +1,18 @@ +// |jit-test| error: ReferenceError +gczeal(0); +gcparam("markStackLimit", 1); +var g = newGlobal({ + newCompartment: true +}); +var dbg = new Debugger; +var gw = dbg.addDebuggee(g); +dbg.onDebuggerStatement = function(frame) { + frame.environment.parent.getVariable('y') +}; +g.eval(` + let y = 1; + g = function () { debugger; }; + g(); +`); +gczeal(9, 10); +f4(); diff --git a/js/src/vm/JSObject.cpp b/js/src/vm/JSObject.cpp index 6adb53dbffdfb..5ac34f886f076 100644 --- a/js/src/vm/JSObject.cpp +++ b/js/src/vm/JSObject.cpp @@ -4202,6 +4202,10 @@ void JSObject::traceChildren(JSTracer* trc) { if (clasp->hasTrace()) { clasp->doTrace(trc, this); } + + if (trc->isMarkingTracer()) { + GCMarker::fromTracer(trc)->markImplicitEdges(this); + } } static JSAtom* displayAtomFromObjectGroup(ObjectGroup& group) { diff --git a/js/src/vm/JSScript.cpp b/js/src/vm/JSScript.cpp index 82c2f79dfa250..1a7b9f5d63847 100644 --- a/js/src/vm/JSScript.cpp +++ b/js/src/vm/JSScript.cpp @@ -4411,7 +4411,7 @@ void JSScript::traceChildren(JSTracer* trc) { jit::TraceJitScripts(trc, this); if (trc->isMarkingTracer()) { - return GCMarker::fromTracer(trc)->markImplicitEdges(this); + GCMarker::fromTracer(trc)->markImplicitEdges(this); } } From 173ea8819542f3fb6fcdfd6a5085939b7ae4f901 Mon Sep 17 00:00:00 2001 From: Jon Coppeard Date: Mon, 21 Jan 2019 12:40:55 +0000 Subject: [PATCH 5/9] Bug 1518075 - Add another check for null script because compilation can 'succeed' if scripting is disabled r=smaug --- dom/base/nsGlobalWindowInner.cpp | 5 +++-- dom/base/nsJSUtils.cpp | 4 ++++ dom/base/nsJSUtils.h | 3 +++ dom/script/ScriptLoader.cpp | 2 +- 4 files changed, 11 insertions(+), 3 deletions(-) diff --git a/dom/base/nsGlobalWindowInner.cpp b/dom/base/nsGlobalWindowInner.cpp index 509fdf0ab8dc4..4e4211c131b2a 100644 --- a/dom/base/nsGlobalWindowInner.cpp +++ b/dom/base/nsGlobalWindowInner.cpp 
@@ -6006,10 +6006,11 @@ bool nsGlobalWindowInner::RunTimeoutHandler(Timeout* aTimeout, nsJSUtils::ExecutionContext exec(aes.cx(), global); rv = exec.Compile(options, handler->GetHandlerText()); - if (rv == NS_OK) { + JSScript* script = exec.MaybeGetScript(); + if (script) { LoadedScript* initiatingScript = handler->GetInitiatingScript(); if (initiatingScript) { - initiatingScript->AssociateWithScript(exec.GetScript()); + initiatingScript->AssociateWithScript(script); } rv = exec.ExecScript(); diff --git a/dom/base/nsJSUtils.cpp b/dom/base/nsJSUtils.cpp index d6faedd3b878d..202702a7fcf40 100644 --- a/dom/base/nsJSUtils.cpp +++ b/dom/base/nsJSUtils.cpp @@ -374,6 +374,10 @@ JSScript* nsJSUtils::ExecutionContext::GetScript() { mScriptUsed = true; #endif + return MaybeGetScript(); +} + +JSScript* nsJSUtils::ExecutionContext::MaybeGetScript() { return mScript; } diff --git a/dom/base/nsJSUtils.h b/dom/base/nsJSUtils.h index f15ea9f335f17..7b4dc8e83debb 100644 --- a/dom/base/nsJSUtils.h +++ b/dom/base/nsJSUtils.h @@ -175,6 +175,9 @@ class nsJSUtils { // Get a successfully compiled script. JSScript* GetScript(); + // Get the compiled script if present, or nullptr. + JSScript* MaybeGetScript(); + // Execute the compiled script and ignore the return value. MOZ_MUST_USE nsresult ExecScript(); diff --git a/dom/script/ScriptLoader.cpp b/dom/script/ScriptLoader.cpp index bf03a90b50225..fd7d8e5a0c24d 100644 --- a/dom/script/ScriptLoader.cpp +++ b/dom/script/ScriptLoader.cpp @@ -2445,7 +2445,7 @@ class MOZ_RAII AutoSetProcessingScriptTag { static nsresult ExecuteCompiledScript(JSContext* aCx, ScriptLoadRequest* aRequest, nsJSUtils::ExecutionContext& aExec) { - JS::Rooted script(aCx, aExec.GetScript()); + JS::Rooted script(aCx, aExec.MaybeGetScript()); if (!script) { // Compilation succeeds without producing a script if scripting is // disabled for the global. 
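
Editorial illustration, not part of the patch: the caller-side pattern this change establishes can be sketched as follows. The surrounding caller is hypothetical (srcText and initiatingScript are placeholders); Compile, MaybeGetScript, ExecScript, and AssociateWithScript are the methods touched above.

  nsresult rv = exec.Compile(options, srcText);
  NS_ENSURE_SUCCESS(rv, rv);

  JSScript* script = exec.MaybeGetScript();
  if (!script) {
    // Compilation "succeeded" without producing a script because scripting
    // is disabled for the global; there is nothing to associate or execute.
    return NS_OK;
  }

  initiatingScript->AssociateWithScript(script);
  return exec.ExecScript();

Per the header comments added in this patch, GetScript() remains the accessor for callers that require a successfully compiled script, while MaybeGetScript() is the form for callers that must treat a null script as a legitimate outcome.
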
From deb24b809cc4f7e4c65baeda17950224cbe55a4e Mon Sep 17 00:00:00 2001 From: longsonr Date: Mon, 21 Jan 2019 13:08:12 +0000 Subject: [PATCH 6/9] Bug 1519253 - Move nsSMILInterval and nsSMILRepeatCount to the mozilla namespace r=birtles --HG-- rename : dom/smil/nsSMILInterval.cpp => dom/smil/SMILInterval.cpp rename : dom/smil/nsSMILInterval.h => dom/smil/SMILInterval.h rename : dom/smil/nsSMILRepeatCount.cpp => dom/smil/SMILRepeatCount.cpp rename : dom/smil/nsSMILRepeatCount.h => dom/smil/SMILRepeatCount.h --- .../{nsSMILInterval.cpp => SMILInterval.cpp} | 34 ++++++++++-------- dom/smil/{nsSMILInterval.h => SMILInterval.h} | 14 +++++--- dom/smil/SMILParserUtils.cpp | 7 ++-- dom/smil/SMILParserUtils.h | 6 ++-- ...MILRepeatCount.cpp => SMILRepeatCount.cpp} | 11 ++++-- ...{nsSMILRepeatCount.h => SMILRepeatCount.h} | 18 ++++++---- dom/smil/SMILTimedElement.cpp | 35 +++++++++---------- dom/smil/SMILTimedElement.h | 26 +++++++------- dom/smil/moz.build | 8 ++--- dom/smil/nsSMILInstanceTime.cpp | 9 ++--- dom/smil/nsSMILInstanceTime.h | 21 ++++++----- dom/smil/nsSMILTimeValue.h | 2 +- dom/smil/nsSMILTimeValueSpec.cpp | 4 +-- dom/smil/nsSMILTimeValueSpec.h | 5 +-- 14 files changed, 111 insertions(+), 89 deletions(-) rename dom/smil/{nsSMILInterval.cpp => SMILInterval.cpp} (83%) rename dom/smil/{nsSMILInterval.h => SMILInterval.h} (93%) rename dom/smil/{nsSMILRepeatCount.cpp => SMILRepeatCount.cpp} (62%) rename dom/smil/{nsSMILRepeatCount.h => SMILRepeatCount.h} (83%) diff --git a/dom/smil/nsSMILInterval.cpp b/dom/smil/SMILInterval.cpp similarity index 83% rename from dom/smil/nsSMILInterval.cpp rename to dom/smil/SMILInterval.cpp index 38ec7a9727877..a97889eceb088 100644 --- a/dom/smil/nsSMILInterval.cpp +++ b/dom/smil/SMILInterval.cpp @@ -4,11 +4,13 @@ * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ -#include "nsSMILInterval.h" +#include "SMILInterval.h" -nsSMILInterval::nsSMILInterval() : mBeginFixed(false), mEndFixed(false) {} +namespace mozilla { -nsSMILInterval::nsSMILInterval(const nsSMILInterval& aOther) +SMILInterval::SMILInterval() : mBeginFixed(false), mEndFixed(false) {} + +SMILInterval::SMILInterval(const SMILInterval& aOther) : mBegin(aOther.mBegin), mEnd(aOther.mEnd), mBeginFixed(false), @@ -25,13 +27,13 @@ nsSMILInterval::nsSMILInterval(const nsSMILInterval& aOther) "Attempt to copy-construct an interval with fixed endpoints"); } -nsSMILInterval::~nsSMILInterval() { +SMILInterval::~SMILInterval() { MOZ_ASSERT(mDependentTimes.IsEmpty(), "Destroying interval without disassociating dependent instance " "times. 
Unlink was not called"); } -void nsSMILInterval::Unlink(bool aFiltered) { +void SMILInterval::Unlink(bool aFiltered) { for (int32_t i = mDependentTimes.Length() - 1; i >= 0; --i) { if (aFiltered) { mDependentTimes[i]->HandleFilteredInterval(); @@ -50,17 +52,17 @@ void nsSMILInterval::Unlink(bool aFiltered) { mEnd = nullptr; } -nsSMILInstanceTime* nsSMILInterval::Begin() { +nsSMILInstanceTime* SMILInterval::Begin() { MOZ_ASSERT(mBegin && mEnd, "Requesting Begin() on un-initialized interval."); return mBegin; } -nsSMILInstanceTime* nsSMILInterval::End() { +nsSMILInstanceTime* SMILInterval::End() { MOZ_ASSERT(mBegin && mEnd, "Requesting End() on un-initialized interval."); return mEnd; } -void nsSMILInterval::SetBegin(nsSMILInstanceTime& aBegin) { +void SMILInterval::SetBegin(nsSMILInstanceTime& aBegin) { MOZ_ASSERT(aBegin.Time().IsDefinite(), "Attempt to set unresolved or indefinite begin time on interval"); MOZ_ASSERT(!mBeginFixed, @@ -74,7 +76,7 @@ void nsSMILInterval::SetBegin(nsSMILInstanceTime& aBegin) { mBegin = &aBegin; } -void nsSMILInterval::SetEnd(nsSMILInstanceTime& aEnd) { +void SMILInterval::SetEnd(nsSMILInstanceTime& aEnd) { MOZ_ASSERT(!mEndFixed, "Attempt to set end time but the end point is fixed"); // As with SetBegin, check we're not making an instance time dependent on // itself. @@ -84,14 +86,14 @@ void nsSMILInterval::SetEnd(nsSMILInstanceTime& aEnd) { mEnd = &aEnd; } -void nsSMILInterval::FixBegin() { +void SMILInterval::FixBegin() { MOZ_ASSERT(mBegin && mEnd, "Fixing begin point on un-initialized interval"); MOZ_ASSERT(!mBeginFixed, "Duplicate calls to FixBegin()"); mBeginFixed = true; mBegin->AddRefFixedEndpoint(); } -void nsSMILInterval::FixEnd() { +void SMILInterval::FixEnd() { MOZ_ASSERT(mBegin && mEnd, "Fixing end point on un-initialized interval"); MOZ_ASSERT(mBeginFixed, "Fixing the end of an interval without a fixed begin"); @@ -100,7 +102,7 @@ void nsSMILInterval::FixEnd() { mEnd->AddRefFixedEndpoint(); } -void nsSMILInterval::AddDependentTime(nsSMILInstanceTime& aTime) { +void SMILInterval::AddDependentTime(nsSMILInstanceTime& aTime) { RefPtr* inserted = mDependentTimes.InsertElementSorted(&aTime); if (!inserted) { @@ -108,7 +110,7 @@ void nsSMILInterval::AddDependentTime(nsSMILInstanceTime& aTime) { } } -void nsSMILInterval::RemoveDependentTime(const nsSMILInstanceTime& aTime) { +void SMILInterval::RemoveDependentTime(const nsSMILInstanceTime& aTime) { #ifdef DEBUG bool found = #endif @@ -116,11 +118,11 @@ void nsSMILInterval::RemoveDependentTime(const nsSMILInstanceTime& aTime) { MOZ_ASSERT(found, "Couldn't find instance time to delete."); } -void nsSMILInterval::GetDependentTimes(InstanceTimeList& aTimes) { +void SMILInterval::GetDependentTimes(InstanceTimeList& aTimes) { aTimes = mDependentTimes; } -bool nsSMILInterval::IsDependencyChainLink() const { +bool SMILInterval::IsDependencyChainLink() const { if (!mBegin || !mEnd) return false; // Not yet initialised so it can't be part of a chain @@ -132,3 +134,5 @@ bool nsSMILInterval::IsDependencyChainLink() const { return (mBegin->IsDependent() && mBegin->GetBaseInterval() != this) || (mEnd->IsDependent() && mEnd->GetBaseInterval() != this); } + +} // namespace mozilla diff --git a/dom/smil/nsSMILInterval.h b/dom/smil/SMILInterval.h similarity index 93% rename from dom/smil/nsSMILInterval.h rename to dom/smil/SMILInterval.h index 339cf44a44149..524a126de7b34 100644 --- a/dom/smil/nsSMILInterval.h +++ b/dom/smil/SMILInterval.h @@ -10,8 +10,10 @@ #include "nsSMILInstanceTime.h" #include "nsTArray.h" +namespace 
mozilla { + //---------------------------------------------------------------------- -// nsSMILInterval class +// SMILInterval class // // A structure consisting of a begin and end time. The begin time must be // resolved (i.e. not indefinite or unresolved). @@ -19,11 +21,11 @@ // For an overview of how this class is related to other SMIL time classes see // the documentation in nsSMILTimeValue.h -class nsSMILInterval { +class SMILInterval { public: - nsSMILInterval(); - nsSMILInterval(const nsSMILInterval& aOther); - ~nsSMILInterval(); + SMILInterval(); + SMILInterval(const SMILInterval& aOther); + ~SMILInterval(); void Unlink(bool aFiltered = false); const nsSMILInstanceTime* Begin() const { @@ -79,4 +81,6 @@ class nsSMILInterval { bool mEndFixed; }; +} // namespace mozilla + #endif // NS_SMILINTERVAL_H_ diff --git a/dom/smil/SMILParserUtils.cpp b/dom/smil/SMILParserUtils.cpp index cafd57251844e..a8555d4c03c37 100644 --- a/dom/smil/SMILParserUtils.cpp +++ b/dom/smil/SMILParserUtils.cpp @@ -5,15 +5,16 @@ * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #include "SMILParserUtils.h" + +#include "mozilla/SMILKeySpline.h" +#include "mozilla/SMILRepeatCount.h" #include "mozilla/SVGContentUtils.h" #include "mozilla/TextUtils.h" -#include "SMILKeySpline.h" #include "nsISMILAttr.h" #include "nsSMILValue.h" #include "nsSMILTimeValue.h" #include "nsSMILTimeValueSpecParams.h" #include "nsSMILTypes.h" -#include "nsSMILRepeatCount.h" #include "nsContentUtils.h" #include "nsCharSeparatedTokenizer.h" @@ -551,7 +552,7 @@ bool SMILParserUtils::ParseValuesGeneric(const nsAString& aSpec, } bool SMILParserUtils::ParseRepeatCount(const nsAString& aSpec, - nsSMILRepeatCount& aResult) { + SMILRepeatCount& aResult) { const nsAString& spec = SMILParserUtils::TrimWhitespace(aSpec); if (spec.EqualsLiteral("indefinite")) { diff --git a/dom/smil/SMILParserUtils.h b/dom/smil/SMILParserUtils.h index 4178e77555327..ae9b2ecd97629 100644 --- a/dom/smil/SMILParserUtils.h +++ b/dom/smil/SMILParserUtils.h @@ -11,13 +11,13 @@ #include "nsStringFwd.h" class nsISMILAttr; -class SMILKeySpline; class nsSMILTimeValue; class nsSMILValue; -class nsSMILRepeatCount; class nsSMILTimeValueSpecParams; namespace mozilla { +class SMILKeySpline; +class SMILRepeatCount; namespace dom { class SVGAnimationElement; } // namespace dom @@ -57,7 +57,7 @@ class SMILParserUtils { GenericValueParser& aParser); static bool ParseRepeatCount(const nsAString& aSpec, - nsSMILRepeatCount& aResult); + SMILRepeatCount& aResult); static bool ParseTimeValueSpecParams(const nsAString& aSpec, nsSMILTimeValueSpecParams& aResult); diff --git a/dom/smil/nsSMILRepeatCount.cpp b/dom/smil/SMILRepeatCount.cpp similarity index 62% rename from dom/smil/nsSMILRepeatCount.cpp rename to dom/smil/SMILRepeatCount.cpp index 47754120f49b5..ff872855b2439 100644 --- a/dom/smil/nsSMILRepeatCount.cpp +++ b/dom/smil/SMILRepeatCount.cpp @@ -4,7 +4,12 @@ * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. 
*/ -#include "nsSMILRepeatCount.h" +#include "SMILRepeatCount.h" + +namespace mozilla { + +/*static*/ const double SMILRepeatCount::kNotSet = -1.0; +/*static*/ const double SMILRepeatCount::kIndefinite = -2.0; + +} // namespace mozilla -/*static*/ const double nsSMILRepeatCount::kNotSet = -1.0; -/*static*/ const double nsSMILRepeatCount::kIndefinite = -2.0; diff --git a/dom/smil/nsSMILRepeatCount.h b/dom/smil/SMILRepeatCount.h similarity index 83% rename from dom/smil/nsSMILRepeatCount.h rename to dom/smil/SMILRepeatCount.h index 7a5cf2036cba5..7e4931252b8fc 100644 --- a/dom/smil/nsSMILRepeatCount.h +++ b/dom/smil/SMILRepeatCount.h @@ -4,14 +4,16 @@ * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ -#ifndef nsSMILRepeatCount_h -#define nsSMILRepeatCount_h +#ifndef SMILRepeatCount_h +#define SMILRepeatCount_h #include "nsDebug.h" #include +namespace mozilla { + //---------------------------------------------------------------------- -// nsSMILRepeatCount +// SMILRepeatCount // // A tri-state non-negative floating point number for representing the number of // times an animation repeat, i.e. the SMIL repeatCount attribute. @@ -21,10 +23,10 @@ // 2. set (with non-negative, non-zero count value) // 3. indefinite // -class nsSMILRepeatCount { +class SMILRepeatCount { public: - nsSMILRepeatCount() : mCount(kNotSet) {} - explicit nsSMILRepeatCount(double aCount) : mCount(kNotSet) { + SMILRepeatCount() : mCount(kNotSet) {} + explicit SMILRepeatCount(double aCount) : mCount(kNotSet) { SetCount(aCount); } @@ -37,7 +39,7 @@ class nsSMILRepeatCount { bool IsIndefinite() const { return mCount == kIndefinite; } bool IsSet() const { return mCount != kNotSet; } - nsSMILRepeatCount& operator=(double aCount) { + SMILRepeatCount& operator=(double aCount) { SetCount(aCount); return *this; } @@ -55,4 +57,6 @@ class nsSMILRepeatCount { double mCount; }; +} // namespace mozilla + #endif diff --git a/dom/smil/SMILTimedElement.cpp b/dom/smil/SMILTimedElement.cpp index e3c80e5f02260..65b564b50e568 100644 --- a/dom/smil/SMILTimedElement.cpp +++ b/dom/smil/SMILTimedElement.cpp @@ -548,14 +548,14 @@ void SMILTimedElement::DoSampleAt(nsSMILTime aContainerTime, bool aEndOnly) { switch (mElementState) { case STATE_STARTUP: { - nsSMILInterval firstInterval; + SMILInterval firstInterval; mElementState = GetNextInterval(nullptr, nullptr, nullptr, firstInterval) ? STATE_WAITING : STATE_POSTACTIVE; stateChanged = true; if (mElementState == STATE_WAITING) { - mCurrentInterval = MakeUnique(firstInterval); + mCurrentInterval = MakeUnique(firstInterval); NotifyNewInterval(); } } break; @@ -592,7 +592,7 @@ void SMILTimedElement::DoSampleAt(nsSMILTime aContainerTime, bool aEndOnly) { bool didApplyEarlyEnd = ApplyEarlyEnd(sampleTime); if (mCurrentInterval->End()->Time() <= sampleTime) { - nsSMILInterval newInterval; + SMILInterval newInterval; mElementState = GetNextInterval(mCurrentInterval.get(), nullptr, nullptr, newInterval) ? 
STATE_WAITING @@ -608,7 +608,7 @@ void SMILTimedElement::DoSampleAt(nsSMILTime aContainerTime, bool aEndOnly) { mOldIntervals.AppendElement(std::move(mCurrentInterval)); SampleFillValue(); if (mElementState == STATE_WAITING) { - mCurrentInterval = MakeUnique(newInterval); + mCurrentInterval = MakeUnique(newInterval); } // We are now in a consistent state to dispatch notifications if (didApplyEarlyEnd) { @@ -950,7 +950,7 @@ nsresult SMILTimedElement::SetRepeatCount(const nsAString& aRepeatCountSpec) { // Update the current interval before returning AutoIntervalUpdater updater(*this); - nsSMILRepeatCount newRepeatCount; + SMILRepeatCount newRepeatCount; if (SMILParserUtils::ParseRepeatCount(aRepeatCountSpec, newRepeatCount)) { mRepeatCount = newRepeatCount; @@ -1354,7 +1354,7 @@ void SMILTimedElement::DoPostSeek() { } void SMILTimedElement::UnpreserveInstanceTimes(InstanceTimeList& aList) { - const nsSMILInterval* prevInterval = GetPreviousInterval(); + const SMILInterval* prevInterval = GetPreviousInterval(); const nsSMILInstanceTime* cutoff = mCurrentInterval ? mCurrentInterval->Begin() : prevInterval ? prevInterval->Begin() : nullptr; @@ -1409,7 +1409,7 @@ void SMILTimedElement::FilterIntervals() { : 0; IntervalList filteredList; for (uint32_t i = 0; i < mOldIntervals.Length(); ++i) { - nsSMILInterval* interval = mOldIntervals[i].get(); + SMILInterval* interval = mOldIntervals[i].get(); if (i != 0 && /*skip first interval*/ i + 1 < mOldIntervals.Length() && /*skip previous interval*/ (i < threshold || !interval->IsDependencyChainLink())) { @@ -1477,7 +1477,7 @@ void SMILTimedElement::FilterInstanceTimes(InstanceTimeList& aList) { if (mCurrentInterval) { timesToKeep.AppendElement(mCurrentInterval->Begin()); } - const nsSMILInterval* prevInterval = GetPreviousInterval(); + const SMILInterval* prevInterval = GetPreviousInterval(); if (prevInterval) { timesToKeep.AppendElement(prevInterval->End()); } @@ -1496,9 +1496,8 @@ void SMILTimedElement::FilterInstanceTimes(InstanceTimeList& aList) { // http://www.w3.org/TR/2001/REC-smil-animation-20010904/#Timing-BeginEnd-LC-Start // bool SMILTimedElement::GetNextInterval( - const nsSMILInterval* aPrevInterval, - const nsSMILInterval* aReplacedInterval, - const nsSMILInstanceTime* aFixedBeginTime, nsSMILInterval& aResult) const { + const SMILInterval* aPrevInterval, const SMILInterval* aReplacedInterval, + const nsSMILInstanceTime* aFixedBeginTime, SMILInterval& aResult) const { MOZ_ASSERT(!aFixedBeginTime || aFixedBeginTime->Time().IsDefinite(), "Unresolved or indefinite begin time given for interval start"); static const nsSMILTimeValue zeroTime(0L); @@ -1860,13 +1859,13 @@ void SMILTimedElement::UpdateCurrentInterval(bool aForceChangeNotice) { // If the interval is active the begin time is fixed. const nsSMILInstanceTime* beginTime = mElementState == STATE_ACTIVE ? 
mCurrentInterval->Begin() : nullptr; - nsSMILInterval updatedInterval; + SMILInterval updatedInterval; if (GetNextInterval(GetPreviousInterval(), mCurrentInterval.get(), beginTime, updatedInterval)) { if (mElementState == STATE_POSTACTIVE) { MOZ_ASSERT(!mCurrentInterval, "In postactive state but the interval has been set"); - mCurrentInterval = MakeUnique(updatedInterval); + mCurrentInterval = MakeUnique(updatedInterval); mElementState = STATE_WAITING; NotifyNewInterval(); @@ -1930,7 +1929,7 @@ void SMILTimedElement::SampleFillValue() { nsSMILTime activeTime; if (mElementState == STATE_WAITING || mElementState == STATE_POSTACTIVE) { - const nsSMILInterval* prevInterval = GetPreviousInterval(); + const SMILInterval* prevInterval = GetPreviousInterval(); MOZ_ASSERT(prevInterval, "Attempting to sample fill value but there is no previous " "interval"); @@ -2091,7 +2090,7 @@ void SMILTimedElement::NotifyNewInterval() { } for (auto iter = mTimeDependents.Iter(); !iter.Done(); iter.Next()) { - nsSMILInterval* interval = mCurrentInterval.get(); + SMILInterval* interval = mCurrentInterval.get(); // It's possible that in notifying one new time dependent of a new interval // that a chain reaction is triggered which results in the original // interval disappearing. If that's the case we can skip sending further @@ -2104,7 +2103,7 @@ void SMILTimedElement::NotifyNewInterval() { } } -void SMILTimedElement::NotifyChangedInterval(nsSMILInterval* aInterval, +void SMILTimedElement::NotifyChangedInterval(SMILInterval* aInterval, bool aBeginObjectChanged, bool aEndObjectChanged) { MOZ_ASSERT(aInterval, "Null interval for change notification"); @@ -2144,14 +2143,14 @@ const nsSMILInstanceTime* SMILTimedElement::GetEffectiveBeginInstance() const { case STATE_WAITING: case STATE_POSTACTIVE: { - const nsSMILInterval* prevInterval = GetPreviousInterval(); + const SMILInterval* prevInterval = GetPreviousInterval(); return prevInterval ? prevInterval->Begin() : nullptr; } } MOZ_CRASH("Invalid element state"); } -const nsSMILInterval* SMILTimedElement::GetPreviousInterval() const { +const SMILInterval* SMILTimedElement::GetPreviousInterval() const { return mOldIntervals.IsEmpty() ? nullptr : mOldIntervals[mOldIntervals.Length() - 1].get(); diff --git a/dom/smil/SMILTimedElement.h b/dom/smil/SMILTimedElement.h index f03123750f8b6..3b30dbd5cb08f 100644 --- a/dom/smil/SMILTimedElement.h +++ b/dom/smil/SMILTimedElement.h @@ -10,11 +10,11 @@ #include "mozilla/EventForwards.h" #include "mozilla/Move.h" #include "mozilla/SMILMilestone.h" +#include "mozilla/SMILInterval.h" +#include "mozilla/SMILRepeatCount.h" #include "mozilla/UniquePtr.h" -#include "nsSMILInterval.h" #include "nsSMILInstanceTime.h" #include "nsSMILTimeValueSpec.h" -#include "nsSMILRepeatCount.h" #include "nsSMILTypes.h" #include "nsTArray.h" #include "nsTHashtable.h" @@ -348,7 +348,7 @@ class SMILTimedElement { // Typedefs typedef nsTArray> TimeValueSpecList; typedef nsTArray> InstanceTimeList; - typedef nsTArray> IntervalList; + typedef nsTArray> IntervalList; typedef nsPtrHashKey TimeValueSpecPtrKey; typedef nsTHashtable TimeValueSpecHashSet; @@ -459,13 +459,13 @@ class SMILTimedElement { /** * Helper function to iterate through this element's accumulated timing - * information (specifically old nsSMILIntervals and nsSMILTimeInstanceTimes) + * information (specifically old SMILIntervals and nsSMILTimeInstanceTimes) * and discard items that are no longer needed or exceed some threshold of * accumulated state. 
*/ void FilterHistory(); - // Helper functions for FilterHistory to clear old nsSMILIntervals and + // Helper functions for FilterHistory to clear old SMILIntervals and // nsSMILInstanceTimes respectively. void FilterIntervals(); void FilterInstanceTimes(InstanceTimeList& aList); @@ -492,10 +492,10 @@ class SMILTimedElement { * returned). * @return true if a suitable interval was found, false otherwise. */ - bool GetNextInterval(const nsSMILInterval* aPrevInterval, - const nsSMILInterval* aReplacedInterval, + bool GetNextInterval(const SMILInterval* aPrevInterval, + const SMILInterval* aReplacedInterval, const nsSMILInstanceTime* aFixedBeginTime, - nsSMILInterval& aResult) const; + SMILInterval& aResult) const; nsSMILInstanceTime* GetNextGreater(const InstanceTimeList& aList, const nsSMILTimeValue& aBase, int32_t& aPosition) const; @@ -525,12 +525,12 @@ class SMILTimedElement { // (ii) after calling these methods we must assume that the state of the // element may have changed. void NotifyNewInterval(); - void NotifyChangedInterval(nsSMILInterval* aInterval, - bool aBeginObjectChanged, bool aEndObjectChanged); + void NotifyChangedInterval(SMILInterval* aInterval, bool aBeginObjectChanged, + bool aEndObjectChanged); void FireTimeEventAsync(EventMessage aMsg, int32_t aDetail); const nsSMILInstanceTime* GetEffectiveBeginInstance() const; - const nsSMILInterval* GetPreviousInterval() const; + const SMILInterval* GetPreviousInterval() const; bool HasPlayed() const { return !mOldIntervals.IsEmpty(); } bool HasClientInFillRange() const; bool EndHasEventConditions() const; @@ -557,7 +557,7 @@ class SMILTimedElement { nsSMILTimeValue mSimpleDur; - nsSMILRepeatCount mRepeatCount; + SMILRepeatCount mRepeatCount; nsSMILTimeValue mRepeatDur; nsSMILTimeValue mMin; @@ -580,7 +580,7 @@ class SMILTimedElement { uint32_t mInstanceSerialIndex; SMILAnimationFunction* mClient; - UniquePtr mCurrentInterval; + UniquePtr mCurrentInterval; IntervalList mOldIntervals; uint32_t mCurrentRepeatIteration; SMILMilestone mPrevRegisteredMilestone; diff --git a/dom/smil/moz.build b/dom/smil/moz.build index d2ad95474b3ef..98b75aab3e145 100644 --- a/dom/smil/moz.build +++ b/dom/smil/moz.build @@ -12,8 +12,6 @@ MOCHITEST_MANIFESTS += ['test/mochitest.ini'] EXPORTS += [ 'nsISMILAttr.h', 'nsSMILInstanceTime.h', - 'nsSMILInterval.h', - 'nsSMILRepeatCount.h', 'nsSMILTimeValue.h', 'nsSMILTimeValueSpec.h', 'nsSMILTimeValueSpecParams.h', @@ -26,10 +24,12 @@ EXPORTS.mozilla += [ 'SMILAnimationFunction.h', 'SMILCompositorTable.h', 'SMILCSSValueType.h', + 'SMILInterval.h', 'SMILKeySpline.h', 'SMILMilestone.h', 'SMILNullType.h', 'SMILParserUtils.h', + 'SMILRepeatCount.h', 'SMILSetAnimationFunction.h', 'SMILTargetIdentifier.h', 'SMILTimeContainer.h', @@ -43,8 +43,6 @@ EXPORTS.mozilla.dom += [ UNIFIED_SOURCES += [ 'nsSMILInstanceTime.cpp', - 'nsSMILInterval.cpp', - 'nsSMILRepeatCount.cpp', 'nsSMILTimeValue.cpp', 'nsSMILTimeValueSpec.cpp', 'nsSMILValue.cpp', @@ -57,9 +55,11 @@ UNIFIED_SOURCES += [ 'SMILEnumType.cpp', 'SMILFloatType.cpp', 'SMILIntegerType.cpp', + 'SMILInterval.cpp', 'SMILKeySpline.cpp', 'SMILNullType.cpp', 'SMILParserUtils.cpp', + 'SMILRepeatCount.cpp', 'SMILSetAnimationFunction.cpp', 'SMILStringType.cpp', 'SMILTimeContainer.cpp', diff --git a/dom/smil/nsSMILInstanceTime.cpp b/dom/smil/nsSMILInstanceTime.cpp index 6b2e603e81d99..a50786dfedab6 100644 --- a/dom/smil/nsSMILInstanceTime.cpp +++ b/dom/smil/nsSMILInstanceTime.cpp @@ -5,9 +5,10 @@ * file, You can obtain one at http://mozilla.org/MPL/2.0/. 
*/ #include "nsSMILInstanceTime.h" -#include "nsSMILInterval.h" -#include "nsSMILTimeValueSpec.h" + #include "mozilla/AutoRestore.h" +#include "mozilla/SMILInterval.h" +#include "nsSMILTimeValueSpec.h" //---------------------------------------------------------------------- // Implementation @@ -15,7 +16,7 @@ nsSMILInstanceTime::nsSMILInstanceTime(const nsSMILTimeValue& aTime, nsSMILInstanceTimeSource aSource, nsSMILTimeValueSpec* aCreator, - nsSMILInterval* aBaseInterval) + SMILInterval* aBaseInterval) : mTime(aTime), mFlags(0), mVisited(false), @@ -165,7 +166,7 @@ const nsSMILInstanceTime* nsSMILInstanceTime::GetBaseTime() const { : mBaseInterval->End(); } -void nsSMILInstanceTime::SetBaseInterval(nsSMILInterval* aBaseInterval) { +void nsSMILInstanceTime::SetBaseInterval(SMILInterval* aBaseInterval) { MOZ_ASSERT(!mBaseInterval, "Attempting to reassociate an instance time with a different " "interval."); diff --git a/dom/smil/nsSMILInstanceTime.h b/dom/smil/nsSMILInstanceTime.h index ef8f8f60d62b9..2686ef323c0ff 100644 --- a/dom/smil/nsSMILInstanceTime.h +++ b/dom/smil/nsSMILInstanceTime.h @@ -10,10 +10,10 @@ #include "nsISupportsImpl.h" #include "nsSMILTimeValue.h" -class nsSMILInterval; class nsSMILTimeValueSpec; namespace mozilla { +class SMILInterval; class SMILTimeContainer; } @@ -29,15 +29,18 @@ class SMILTimeContainer; // These objects are owned by an SMILTimedElement but MAY also be referenced // by: // -// a) nsSMILIntervals that belong to the same SMILTimedElement and which refer +// a) SMILIntervals that belong to the same SMILTimedElement and which refer // to the nsSMILInstanceTimes which form the interval endpoints; and/or -// b) nsSMILIntervals that belong to other SMILTimedElements but which need to +// b) SMILIntervals that belong to other SMILTimedElements but which need to // update dependent instance times when they change or are deleted. // E.g. for begin='a.begin', 'a' needs to inform dependent // nsSMILInstanceTimes if its begin time changes. This notification is -// performed by the nsSMILInterval. +// performed by the SMILInterval. class nsSMILInstanceTime final { + typedef mozilla::SMILInterval SMILInterval; + typedef mozilla::SMILTimeContainer SMILTimeContainer; + public: // Instance time source. 
Times generated by events, syncbase relationships, // and DOM calls behave differently in some circumstances such as when a timed @@ -56,10 +59,10 @@ class nsSMILInstanceTime final { explicit nsSMILInstanceTime(const nsSMILTimeValue& aTime, nsSMILInstanceTimeSource aSource = SOURCE_NONE, nsSMILTimeValueSpec* aCreator = nullptr, - nsSMILInterval* aBaseInterval = nullptr); + SMILInterval* aBaseInterval = nullptr); void Unlink(); - void HandleChangedInterval(const mozilla::SMILTimeContainer* aSrcContainer, + void HandleChangedInterval(const SMILTimeContainer* aSrcContainer, bool aBeginObjectChanged, bool aEndObjectChanged); void HandleDeletedInterval(); void HandleFilteredInterval(); @@ -85,7 +88,7 @@ class nsSMILInstanceTime final { bool IsDependent() const { return !!mBaseInterval; } bool IsDependentOn(const nsSMILInstanceTime& aOther) const; - const nsSMILInterval* GetBaseInterval() const { return mBaseInterval; } + const SMILInterval* GetBaseInterval() const { return mBaseInterval; } const nsSMILInstanceTime* GetBaseTime() const; bool SameTimeAndBase(const nsSMILInstanceTime& aOther) const { @@ -103,7 +106,7 @@ class nsSMILInstanceTime final { // Private destructor, to discourage deletion outside of Release(): ~nsSMILInstanceTime(); - void SetBaseInterval(nsSMILInterval* aBaseInterval); + void SetBaseInterval(SMILInterval* aBaseInterval); nsSMILTimeValue mTime; @@ -159,7 +162,7 @@ class nsSMILInstanceTime final { nsSMILTimeValueSpec* mCreator; // The nsSMILTimeValueSpec object that created // us. (currently only needed for syncbase // instance times.) - nsSMILInterval* mBaseInterval; // Interval from which this time is derived + SMILInterval* mBaseInterval; // Interval from which this time is derived // (only used for syncbase instance times) }; diff --git a/dom/smil/nsSMILTimeValue.h b/dom/smil/nsSMILTimeValue.h index aebc793ed3f63..888c2be90cc84 100644 --- a/dom/smil/nsSMILTimeValue.h +++ b/dom/smil/nsSMILTimeValue.h @@ -23,7 +23,7 @@ * nsSMILInstanceTime -- an nsSMILTimeValue used for constructing intervals. It * contains additional fields to govern reset behavior * and track timing dependencies (e.g. syncbase timing). - * nsSMILInterval -- a pair of nsSMILInstanceTimes that defines a begin and + * SMILInterval -- a pair of nsSMILInstanceTimes that defines a begin and * an end time for animation. * nsSMILTimeValueSpec -- a component of a begin or end attribute, such as the * '5s' or 'a.end+2m' in begin="5s; a.end+2m". Acts as diff --git a/dom/smil/nsSMILTimeValueSpec.cpp b/dom/smil/nsSMILTimeValueSpec.cpp index 6d4e971da31c5..ef738996dd718 100644 --- a/dom/smil/nsSMILTimeValueSpec.cpp +++ b/dom/smil/nsSMILTimeValueSpec.cpp @@ -5,6 +5,7 @@ * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ #include "mozilla/EventListenerManager.h" +#include "mozilla/SMILInterval.h" #include "mozilla/SMILParserUtils.h" #include "mozilla/SMILTimeContainer.h" #include "mozilla/SMILTimedElement.h" @@ -12,7 +13,6 @@ #include "mozilla/dom/SVGAnimationElement.h" #include "mozilla/dom/TimeEvent.h" #include "nsSMILTimeValueSpec.h" -#include "nsSMILInterval.h" #include "nsSMILTimeValue.h" #include "nsSMILInstanceTime.h" #include "nsString.h" @@ -108,7 +108,7 @@ bool nsSMILTimeValueSpec::IsEventBased() const { } void nsSMILTimeValueSpec::HandleNewInterval( - nsSMILInterval& aInterval, const SMILTimeContainer* aSrcContainer) { + SMILInterval& aInterval, const SMILTimeContainer* aSrcContainer) { const nsSMILInstanceTime& baseInstance = mParams.mSyncBegin ? 
*aInterval.Begin() : *aInterval.End(); nsSMILTimeValue newTime = diff --git a/dom/smil/nsSMILTimeValueSpec.h b/dom/smil/nsSMILTimeValueSpec.h index 4c4e8ef7bcb51..985654e91bea5 100644 --- a/dom/smil/nsSMILTimeValueSpec.h +++ b/dom/smil/nsSMILTimeValueSpec.h @@ -15,9 +15,9 @@ class nsSMILTimeValue; class nsSMILInstanceTime; -class nsSMILInterval; namespace mozilla { +class SMILInterval; class SMILTimeContainer; class SMILTimedElement; namespace dom { @@ -40,6 +40,7 @@ class EventListenerManager; class nsSMILTimeValueSpec { public: + typedef mozilla::SMILInterval SMILInterval; typedef mozilla::SMILTimeContainer SMILTimeContainer; typedef mozilla::SMILTimedElement SMILTimedElement; typedef mozilla::dom::Element Element; @@ -53,7 +54,7 @@ class nsSMILTimeValueSpec { void ResolveReferences(Element& aContextElement); bool IsEventBased() const; - void HandleNewInterval(nsSMILInterval& aInterval, + void HandleNewInterval(SMILInterval& aInterval, const SMILTimeContainer* aSrcContainer); void HandleTargetElementChange(Element* aNewTarget); From a96229ddb35ce37fcf76056dd58a3cd00f29d6b6 Mon Sep 17 00:00:00 2001 From: Jon Coppeard Date: Mon, 21 Jan 2019 13:09:12 +0000 Subject: [PATCH 7/9] Bug 1512749 - Convert JS::gcreason::Reason to enum class JS:GCReason r=jonco r=mccr8 --- docshell/base/nsDocShell.cpp | 2 +- dom/base/FuzzingFunctions.cpp | 2 +- dom/base/nsDOMWindowUtils.cpp | 4 +- dom/base/nsGlobalWindowOuter.cpp | 2 +- dom/base/nsJSEnvironment.cpp | 32 ++- dom/base/nsJSEnvironment.h | 9 +- dom/indexedDB/ActorsChild.cpp | 2 +- dom/ipc/ContentChild.cpp | 2 +- dom/workers/WorkerPrivate.cpp | 4 +- js/ipc/WrapperAnswer.cpp | 2 +- js/public/GCAPI.h | 30 +-- js/src/builtin/TestingFunctions.cpp | 12 +- js/src/fuzz-tests/testBinASTReader.cpp | 2 +- js/src/fuzz-tests/testExample.cpp | 2 +- .../fuzz-tests/testStructuredCloneReader.cpp | 2 +- js/src/gc/Allocator.cpp | 8 +- js/src/gc/ArenaList.h | 2 +- js/src/gc/GC.cpp | 215 +++++++++--------- js/src/gc/GCInternals.h | 6 +- js/src/gc/GCRuntime.h | 56 +++-- js/src/gc/Nursery.cpp | 35 ++- js/src/gc/Nursery.h | 21 +- js/src/gc/Scheduling.h | 4 +- js/src/gc/Statistics.cpp | 30 +-- js/src/gc/Statistics.h | 12 +- js/src/gc/StoreBuffer.cpp | 4 +- js/src/gc/StoreBuffer.h | 10 +- js/src/gc/Zone.cpp | 2 +- js/src/jsapi-tests/testBinASTReader.cpp | 2 +- js/src/jsapi-tests/testErrorInterceptorGC.cpp | 2 +- js/src/jsapi-tests/testGCFinalizeCallback.cpp | 16 +- js/src/jsapi-tests/testGCGrayMarking.cpp | 6 +- js/src/jsapi-tests/testGCHeapPostBarriers.cpp | 6 +- js/src/jsapi-tests/testGCHooks.cpp | 13 +- js/src/jsapi-tests/testGCMarking.cpp | 2 +- js/src/jsapi-tests/testGCUniqueId.cpp | 2 +- js/src/jsapi-tests/testGCWeakCache.cpp | 2 +- js/src/jsapi-tests/testGCWeakRef.cpp | 2 +- js/src/jsapi-tests/testPreserveJitCode.cpp | 4 +- js/src/jsapi-tests/tests.h | 2 +- js/src/jsapi.cpp | 4 +- js/src/jsfriendapi.cpp | 2 +- js/src/shell/js.cpp | 2 +- js/src/vm/ArrayBufferObject.cpp | 4 +- js/src/vm/Debugger.cpp | 2 +- js/src/vm/JSContext-inl.h | 2 +- js/src/vm/JSContext.h | 2 +- js/src/vm/Runtime.cpp | 2 +- js/src/vm/Shape-inl.h | 2 +- js/xpconnect/src/XPCComponents.cpp | 6 +- js/xpconnect/src/nsXPConnect.cpp | 4 +- layout/base/nsDocumentViewer.cpp | 6 +- parser/html/nsHtml5StreamParser.cpp | 2 +- .../mozparsers/parse_histograms.py | 2 +- .../telemetry/core/TelemetryHistogram.cpp | 6 +- .../python/test_histogramtools_non_strict.py | 2 +- xpcom/base/CycleCollectedJSRuntime.cpp | 25 +- xpcom/base/CycleCollectedJSRuntime.h | 4 +- xpcom/base/nsCycleCollector.cpp | 6 +- 59 files changed, 
315 insertions(+), 343 deletions(-) diff --git a/docshell/base/nsDocShell.cpp b/docshell/base/nsDocShell.cpp index 6f8ad8df2d248..634cde0e6a6b3 100644 --- a/docshell/base/nsDocShell.cpp +++ b/docshell/base/nsDocShell.cpp @@ -10381,7 +10381,7 @@ nsresult nsDocShell::DoChannelLoad(nsIChannel* aChannel, // We're about to load a new page and it may take time before necko // gives back any data, so main thread might have a chance to process a // collector slice - nsJSContext::MaybeRunNextCollectorSlice(this, JS::gcreason::DOCSHELL); + nsJSContext::MaybeRunNextCollectorSlice(this, JS::GCReason::DOCSHELL); // Success. Keep the initial ClientSource if it exists. cleanupInitialClient.release(); diff --git a/dom/base/FuzzingFunctions.cpp b/dom/base/FuzzingFunctions.cpp index 68f4963afede9..8be9fca48734e 100644 --- a/dom/base/FuzzingFunctions.cpp +++ b/dom/base/FuzzingFunctions.cpp @@ -22,7 +22,7 @@ namespace mozilla { namespace dom { /* static */ void FuzzingFunctions::GarbageCollect(const GlobalObject&) { - nsJSContext::GarbageCollectNow(JS::gcreason::COMPONENT_UTILS, + nsJSContext::GarbageCollectNow(JS::GCReason::COMPONENT_UTILS, nsJSContext::NonIncrementalGC, nsJSContext::NonShrinkingGC); } diff --git a/dom/base/nsDOMWindowUtils.cpp b/dom/base/nsDOMWindowUtils.cpp index 20d8b39c3d870..c0ccd593c89be 100644 --- a/dom/base/nsDOMWindowUtils.cpp +++ b/dom/base/nsDOMWindowUtils.cpp @@ -1037,7 +1037,7 @@ NS_IMETHODIMP nsDOMWindowUtils::GarbageCollect(nsICycleCollectorListener* aListener) { AUTO_PROFILER_LABEL("nsDOMWindowUtils::GarbageCollect", GCCC); - nsJSContext::GarbageCollectNow(JS::gcreason::DOM_UTILS); + nsJSContext::GarbageCollectNow(JS::GCReason::DOM_UTILS); nsJSContext::CycleCollectNow(aListener); return NS_OK; @@ -1051,7 +1051,7 @@ nsDOMWindowUtils::CycleCollect(nsICycleCollectorListener* aListener) { NS_IMETHODIMP nsDOMWindowUtils::RunNextCollectorTimer() { - nsJSContext::RunNextCollectorTimer(JS::gcreason::DOM_WINDOW_UTILS); + nsJSContext::RunNextCollectorTimer(JS::GCReason::DOM_WINDOW_UTILS); return NS_OK; } diff --git a/dom/base/nsGlobalWindowOuter.cpp b/dom/base/nsGlobalWindowOuter.cpp index e5a6dfbd935a0..956bb7bc6ceac 100644 --- a/dom/base/nsGlobalWindowOuter.cpp +++ b/dom/base/nsGlobalWindowOuter.cpp @@ -2196,7 +2196,7 @@ void nsGlobalWindowOuter::DetachFromDocShell() { // When we're about to destroy a top level content window // (for example a tab), we trigger a full GC by passing null as the last // param. We also trigger a full GC for chrome windows. - nsJSContext::PokeGC(JS::gcreason::SET_DOC_SHELL, + nsJSContext::PokeGC(JS::GCReason::SET_DOC_SHELL, (mTopLevelOuterContentWindow || mIsChrome) ? nullptr : GetWrapperPreserveColor()); diff --git a/dom/base/nsJSEnvironment.cpp b/dom/base/nsJSEnvironment.cpp index ed009e1e6bcb6..5e7dce4486c0f 100644 --- a/dom/base/nsJSEnvironment.cpp +++ b/dom/base/nsJSEnvironment.cpp @@ -324,12 +324,12 @@ nsJSEnvironmentObserver::Observe(nsISupports* aSubject, const char* aTopic, // slow and it likely won't help us anyway. 
return NS_OK; } - nsJSContext::GarbageCollectNow(JS::gcreason::MEM_PRESSURE, + nsJSContext::GarbageCollectNow(JS::GCReason::MEM_PRESSURE, nsJSContext::NonIncrementalGC, nsJSContext::ShrinkingGC); nsJSContext::CycleCollectNow(); if (NeedsGCAfterCC()) { - nsJSContext::GarbageCollectNow(JS::gcreason::MEM_PRESSURE, + nsJSContext::GarbageCollectNow(JS::GCReason::MEM_PRESSURE, nsJSContext::NonIncrementalGC, nsJSContext::ShrinkingGC); } @@ -577,7 +577,7 @@ nsJSContext::~nsJSContext() { void nsJSContext::Destroy() { if (mGCOnDestruction) { - PokeGC(JS::gcreason::NSJSCONTEXT_DESTROY, mWindowProxy); + PokeGC(JS::GCReason::NSJSCONTEXT_DESTROY, mWindowProxy); } DropJSObjects(this); @@ -1079,17 +1079,17 @@ void nsJSContext::SetProcessingScriptTag(bool aFlag) { void FullGCTimerFired(nsITimer* aTimer, void* aClosure) { nsJSContext::KillFullGCTimer(); MOZ_ASSERT(!aClosure, "Don't pass a closure to FullGCTimerFired"); - nsJSContext::GarbageCollectNow(JS::gcreason::FULL_GC_TIMER, + nsJSContext::GarbageCollectNow(JS::GCReason::FULL_GC_TIMER, nsJSContext::IncrementalGC); } // static -void nsJSContext::GarbageCollectNow(JS::gcreason::Reason aReason, +void nsJSContext::GarbageCollectNow(JS::GCReason aReason, IsIncremental aIncremental, IsShrinking aShrinking, int64_t aSliceMillis) { AUTO_PROFILER_LABEL_DYNAMIC_CSTR("nsJSContext::GarbageCollectNow", GCCC, - JS::gcreason::ExplainReason(aReason)); + JS::ExplainGCReason(aReason)); MOZ_ASSERT_IF(aSliceMillis, aIncremental == IncrementalGC); @@ -1113,7 +1113,7 @@ void nsJSContext::GarbageCollectNow(JS::gcreason::Reason aReason, JSGCInvocationKind gckind = aShrinking == ShrinkingGC ? GC_SHRINK : GC_NORMAL; if (aIncremental == NonIncrementalGC || - aReason == JS::gcreason::FULL_GC_TIMER) { + aReason == JS::GCReason::FULL_GC_TIMER) { sNeedsFullGC = true; } @@ -1139,7 +1139,7 @@ static void FinishAnyIncrementalGC() { // We're in the middle of an incremental GC, so finish it. JS::PrepareForIncrementalGC(jsapi.cx()); - JS::FinishIncrementalGC(jsapi.cx(), JS::gcreason::CC_FORCED); + JS::FinishIncrementalGC(jsapi.cx(), JS::GCReason::CC_FORCED); } } @@ -1573,7 +1573,7 @@ void nsJSContext::EndCycleCollectionCallback(CycleCollectorResults& aResults) { uint32_t ccNowDuration = TimeBetween(gCCStats.mBeginTime, endCCTimeStamp); if (NeedsGCAfterCC()) { - PokeGC(JS::gcreason::CC_WAITING, nullptr, + PokeGC(JS::GCReason::CC_WAITING, nullptr, NS_GC_DELAY - std::min(ccNowDuration, kMaxICCDuration)); } @@ -1730,8 +1730,7 @@ bool InterSliceGCRunnerFired(TimeStamp aDeadline, void* aData) { TimeDuration duration = sGCUnnotifiedTotalTime; uintptr_t reason = reinterpret_cast(aData); nsJSContext::GarbageCollectNow( - aData ? static_cast(reason) - : JS::gcreason::INTER_SLICE_GC, + aData ? static_cast(reason) : JS::GCReason::INTER_SLICE_GC, nsJSContext::IncrementalGC, nsJSContext::NonShrinkingGC, budget); sGCUnnotifiedTotalTime = TimeDuration(); @@ -1780,7 +1779,7 @@ void GCTimerFired(nsITimer* aTimer, void* aClosure) { void ShrinkingGCTimerFired(nsITimer* aTimer, void* aClosure) { nsJSContext::KillShrinkingGCTimer(); sIsCompactingOnUserInactive = true; - nsJSContext::GarbageCollectNow(JS::gcreason::USER_INACTIVE, + nsJSContext::GarbageCollectNow(JS::GCReason::USER_INACTIVE, nsJSContext::IncrementalGC, nsJSContext::ShrinkingGC); } @@ -1886,7 +1885,7 @@ uint32_t nsJSContext::CleanupsSinceLastGC() { return sCleanupsSinceLastGC; } // collection we run on a long timer. 
// static -void nsJSContext::RunNextCollectorTimer(JS::gcreason::Reason aReason, +void nsJSContext::RunNextCollectorTimer(JS::GCReason aReason, mozilla::TimeStamp aDeadline) { if (sShuttingDown) { return; @@ -1925,7 +1924,7 @@ void nsJSContext::RunNextCollectorTimer(JS::gcreason::Reason aReason, // static void nsJSContext::MaybeRunNextCollectorSlice(nsIDocShell* aDocShell, - JS::gcreason::Reason aReason) { + JS::GCReason aReason) { if (!aDocShell || !XRE_IsContentProcess()) { return; } @@ -1972,8 +1971,7 @@ void nsJSContext::MaybeRunNextCollectorSlice(nsIDocShell* aDocShell, } // static -void nsJSContext::PokeGC(JS::gcreason::Reason aReason, JSObject* aObj, - int aDelay) { +void nsJSContext::PokeGC(JS::GCReason aReason, JSObject* aObj, int aDelay) { if (sShuttingDown) { return; } @@ -1981,7 +1979,7 @@ void nsJSContext::PokeGC(JS::gcreason::Reason aReason, JSObject* aObj, if (aObj) { JS::Zone* zone = JS::GetObjectZone(aObj); CycleCollectedJSRuntime::Get()->AddZoneWaitingForGC(zone); - } else if (aReason != JS::gcreason::CC_WAITING) { + } else if (aReason != JS::GCReason::CC_WAITING) { sNeedsFullGC = true; } diff --git a/dom/base/nsJSEnvironment.h b/dom/base/nsJSEnvironment.h index 73eec28c5e16d..d5f005eb2083c 100644 --- a/dom/base/nsJSEnvironment.h +++ b/dom/base/nsJSEnvironment.h @@ -72,7 +72,7 @@ class nsJSContext : public nsIScriptContext { // Setup all the statics etc - safe to call multiple times after Startup(). static void EnsureStatics(); - static void GarbageCollectNow(JS::gcreason::Reason reason, + static void GarbageCollectNow(JS::GCReason reason, IsIncremental aIncremental = NonIncrementalGC, IsShrinking aShrinking = NonShrinkingGC, int64_t aSliceMillis = 0); @@ -97,17 +97,16 @@ class nsJSContext : public nsIScriptContext { // If there is some pending CC or GC timer/runner, this will run it. static void RunNextCollectorTimer( - JS::gcreason::Reason aReason, + JS::GCReason aReason, mozilla::TimeStamp aDeadline = mozilla::TimeStamp()); // If user has been idle and aDocShell is for an iframe being loaded in an // already loaded top level docshell, this will run a CC or GC // timer/runner if there is such pending. static void MaybeRunNextCollectorSlice(nsIDocShell *aDocShell, - JS::gcreason::Reason aReason); + JS::GCReason aReason); // The GC should probably run soon, in the zone of object aObj (if given). 
- static void PokeGC(JS::gcreason::Reason aReason, JSObject *aObj, - int aDelay = 0); + static void PokeGC(JS::GCReason aReason, JSObject *aObj, int aDelay = 0); static void KillGCTimer(); static void PokeShrinkingGC(); diff --git a/dom/indexedDB/ActorsChild.cpp b/dom/indexedDB/ActorsChild.cpp index 7b9209725d9f8..bb39570073021 100644 --- a/dom/indexedDB/ActorsChild.cpp +++ b/dom/indexedDB/ActorsChild.cpp @@ -129,7 +129,7 @@ void MaybeCollectGarbageOnIPCMessage() { return; } - nsJSContext::GarbageCollectNow(JS::gcreason::DOM_IPC); + nsJSContext::GarbageCollectNow(JS::GCReason::DOM_IPC); nsJSContext::CycleCollectNow(); #endif // BUILD_GC_ON_IPC_MESSAGES } diff --git a/dom/ipc/ContentChild.cpp b/dom/ipc/ContentChild.cpp index 3967f4388e03b..2f0b21419b5d1 100644 --- a/dom/ipc/ContentChild.cpp +++ b/dom/ipc/ContentChild.cpp @@ -2472,7 +2472,7 @@ mozilla::ipc::IPCResult ContentChild::RecvGarbageCollect() { if (obs) { obs->NotifyObservers(nullptr, "child-gc-request", nullptr); } - nsJSContext::GarbageCollectNow(JS::gcreason::DOM_IPC); + nsJSContext::GarbageCollectNow(JS::GCReason::DOM_IPC); return IPC_OK(); } diff --git a/dom/workers/WorkerPrivate.cpp b/dom/workers/WorkerPrivate.cpp index 7c85dd16e5367..fb3b15731cca1 100644 --- a/dom/workers/WorkerPrivate.cpp +++ b/dom/workers/WorkerPrivate.cpp @@ -4390,13 +4390,13 @@ void WorkerPrivate::GarbageCollectInternal(JSContext* aCx, bool aShrinking, JS::PrepareForFullGC(aCx); if (aShrinking) { - JS::NonIncrementalGC(aCx, GC_SHRINK, JS::gcreason::DOM_WORKER); + JS::NonIncrementalGC(aCx, GC_SHRINK, JS::GCReason::DOM_WORKER); if (!aCollectChildren) { LOG(WorkerLog(), ("Worker %p collected idle garbage\n", this)); } } else { - JS::NonIncrementalGC(aCx, GC_NORMAL, JS::gcreason::DOM_WORKER); + JS::NonIncrementalGC(aCx, GC_NORMAL, JS::GCReason::DOM_WORKER); LOG(WorkerLog(), ("Worker %p collected garbage\n", this)); } } else { diff --git a/js/ipc/WrapperAnswer.cpp b/js/ipc/WrapperAnswer.cpp index 8abc07cbcd696..612520b360494 100644 --- a/js/ipc/WrapperAnswer.cpp +++ b/js/ipc/WrapperAnswer.cpp @@ -39,7 +39,7 @@ static void MaybeForceDebugGC() { if (sDebugGCs) { JSContext* cx = XPCJSContext::Get()->Context(); PrepareForFullGC(cx); - NonIncrementalGC(cx, GC_NORMAL, gcreason::COMPONENT_UTILS); + NonIncrementalGC(cx, GC_NORMAL, GCReason::COMPONENT_UTILS); } } diff --git a/js/public/GCAPI.h b/js/public/GCAPI.h index 59e829e847601..931d59596bb6e 100644 --- a/js/public/GCAPI.h +++ b/js/public/GCAPI.h @@ -406,10 +406,7 @@ namespace JS { D(DOCSHELL, 54) \ D(HTML_PARSER, 55) -namespace gcreason { - -/* GCReasons will end up looking like JSGC_MAYBEGC */ -enum Reason { +enum class GCReason { #define MAKE_REASON(name, val) name = val, GCREASONS(MAKE_REASON) #undef MAKE_REASON @@ -418,9 +415,8 @@ enum Reason { /* * For telemetry, we want to keep a fixed max bucket size over time so we - * don't have to switch histograms. 100 is conservative; as of this writing - * there are 52. But the cost of extra buckets seems to be low while the - * cost of switching histograms is high. + * don't have to switch histograms. 100 is conservative; but the cost of extra + * buckets seems to be low while the cost of switching histograms is high. */ NUM_TELEMETRY_REASONS = 100 }; @@ -428,9 +424,7 @@ enum Reason { /** * Get a statically allocated C string explaining the given GC reason. 
*/ -extern JS_PUBLIC_API const char* ExplainReason(JS::gcreason::Reason reason); - -} /* namespace gcreason */ +extern JS_PUBLIC_API const char* ExplainGCReason(JS::GCReason reason); /* * Zone GC: @@ -492,7 +486,7 @@ extern JS_PUBLIC_API void SkipZoneForGC(Zone* zone); */ extern JS_PUBLIC_API void NonIncrementalGC(JSContext* cx, JSGCInvocationKind gckind, - gcreason::Reason reason); + GCReason reason); /* * Incremental GC: @@ -525,7 +519,7 @@ extern JS_PUBLIC_API void NonIncrementalGC(JSContext* cx, */ extern JS_PUBLIC_API void StartIncrementalGC(JSContext* cx, JSGCInvocationKind gckind, - gcreason::Reason reason, + GCReason reason, int64_t millis = 0); /** @@ -536,8 +530,7 @@ extern JS_PUBLIC_API void StartIncrementalGC(JSContext* cx, * Note: SpiderMonkey's GC is not realtime. Slices in practice may be longer or * shorter than the requested interval. */ -extern JS_PUBLIC_API void IncrementalGCSlice(JSContext* cx, - gcreason::Reason reason, +extern JS_PUBLIC_API void IncrementalGCSlice(JSContext* cx, GCReason reason, int64_t millis = 0); /** @@ -546,8 +539,7 @@ extern JS_PUBLIC_API void IncrementalGCSlice(JSContext* cx, * this is equivalent to NonIncrementalGC. When this function returns, * IsIncrementalGCInProgress(cx) will always be false. */ -extern JS_PUBLIC_API void FinishIncrementalGC(JSContext* cx, - gcreason::Reason reason); +extern JS_PUBLIC_API void FinishIncrementalGC(JSContext* cx, GCReason reason); /** * If IsIncrementalGCInProgress(cx), this call aborts the ongoing collection and @@ -623,10 +615,10 @@ struct JS_PUBLIC_API GCDescription { bool isZone_; bool isComplete_; JSGCInvocationKind invocationKind_; - gcreason::Reason reason_; + GCReason reason_; GCDescription(bool isZone, bool isComplete, JSGCInvocationKind kind, - gcreason::Reason reason) + GCReason reason) : isZone_(isZone), isComplete_(isComplete), invocationKind_(kind), @@ -681,7 +673,7 @@ enum class GCNurseryProgress { */ using GCNurseryCollectionCallback = void (*)(JSContext* cx, GCNurseryProgress progress, - gcreason::Reason reason); + GCReason reason); /** * Set the nursery collection callback for the given runtime. When set, it will diff --git a/js/src/builtin/TestingFunctions.cpp b/js/src/builtin/TestingFunctions.cpp index 433f0b716db86..efb9fd9befb3f 100644 --- a/js/src/builtin/TestingFunctions.cpp +++ b/js/src/builtin/TestingFunctions.cpp @@ -428,7 +428,7 @@ static bool GC(JSContext* cx, unsigned argc, Value* vp) { } JSGCInvocationKind gckind = shrinking ? 
GC_SHRINK : GC_NORMAL; - JS::NonIncrementalGC(cx, gckind, JS::gcreason::API); + JS::NonIncrementalGC(cx, gckind, JS::GCReason::API); char buf[256] = {'\0'}; #ifndef JS_MORE_DETERMINISTIC @@ -442,10 +442,10 @@ static bool MinorGC(JSContext* cx, unsigned argc, Value* vp) { CallArgs args = CallArgsFromVp(argc, vp); if (args.get(0) == BooleanValue(true)) { cx->runtime()->gc.storeBuffer().setAboutToOverflow( - JS::gcreason::FULL_GENERIC_BUFFER); + JS::GCReason::FULL_GENERIC_BUFFER); } - cx->minorGC(JS::gcreason::API); + cx->minorGC(JS::GCReason::API); args.rval().setUndefined(); return true; } @@ -591,7 +591,7 @@ static bool RelazifyFunctions(JSContext* cx, unsigned argc, Value* vp) { SetAllowRelazification(cx, true); JS::PrepareForFullGC(cx); - JS::NonIncrementalGC(cx, GC_SHRINK, JS::gcreason::API); + JS::NonIncrementalGC(cx, GC_SHRINK, JS::GCReason::API); SetAllowRelazification(cx, false); args.rval().setUndefined(); @@ -4186,7 +4186,7 @@ static void majorGC(JSContext* cx, JSGCStatus status, void* data) { if (info->depth > 0) { info->depth--; JS::PrepareForFullGC(cx); - JS::NonIncrementalGC(cx, GC_NORMAL, JS::gcreason::API); + JS::NonIncrementalGC(cx, GC_NORMAL, JS::GCReason::API); info->depth++; } } @@ -4205,7 +4205,7 @@ static void minorGC(JSContext* cx, JSGCStatus status, void* data) { if (info->active) { info->active = false; if (cx->zone() && !cx->zone()->isAtomsZone()) { - cx->runtime()->gc.evictNursery(JS::gcreason::DEBUG_GC); + cx->runtime()->gc.evictNursery(JS::GCReason::DEBUG_GC); } info->active = true; } diff --git a/js/src/fuzz-tests/testBinASTReader.cpp b/js/src/fuzz-tests/testBinASTReader.cpp index 4fe17eea8f875..fdf956211ac27 100644 --- a/js/src/fuzz-tests/testBinASTReader.cpp +++ b/js/src/fuzz-tests/testBinASTReader.cpp @@ -35,7 +35,7 @@ static int testBinASTReaderFuzz(const uint8_t* buf, size_t size) { auto gcGuard = mozilla::MakeScopeExit([&] { JS::PrepareForFullGC(gCx); - JS::NonIncrementalGC(gCx, GC_NORMAL, JS::gcreason::API); + JS::NonIncrementalGC(gCx, GC_NORMAL, JS::GCReason::API); }); if (!size) return 0; diff --git a/js/src/fuzz-tests/testExample.cpp b/js/src/fuzz-tests/testExample.cpp index 389b037c1aa58..311f230b3abc3 100644 --- a/js/src/fuzz-tests/testExample.cpp +++ b/js/src/fuzz-tests/testExample.cpp @@ -37,7 +37,7 @@ static int testExampleFuzz(const uint8_t* buf, size_t size) { if it is not required in your use case, which will speed up fuzzing. */ auto gcGuard = mozilla::MakeScopeExit([&] { JS::PrepareForFullGC(gCx); - JS::NonIncrementalGC(gCx, GC_NORMAL, JS::gcreason::API); + JS::NonIncrementalGC(gCx, GC_NORMAL, JS::GCReason::API); }); /* Add code here that processes the given buffer. 
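
The hunks above and below mechanically rename call sites from JS::gcreason::FOO to JS::GCReason::FOO. As a reminder of why the explicit casts that appear at a few call sites in this patch become necessary, here is a small self-contained sketch, not part of the patch; the namespaces and reason values are invented and only the scoped-enum technique matches the change:

// Illustrative sketch only: shows why callers must spell out GCReason::FOO
// and cast explicitly once the plain enum becomes an enum class.
#include <cstdint>
#include <cstdio>

namespace old_style {
namespace gcreason {
enum Reason { API = 0, DOM_IPC = 1 };  // unscoped: names leak, converts to int
}
}  // namespace old_style

namespace new_style {
enum class GCReason : uint32_t { API = 0, DOM_IPC = 1 };  // scoped and typed
}

int main() {
  // Unscoped enum: implicit conversion to an integer is allowed.
  uint32_t a = old_style::gcreason::DOM_IPC;

  // Scoped enum: conversions must be explicit, in both directions.
  uint32_t b = static_cast<uint32_t>(new_style::GCReason::DOM_IPC);
  new_style::GCReason r = static_cast<new_style::GCReason>(b);

  std::printf("%u %u %d\n", a, b, r == new_style::GCReason::DOM_IPC);
  return 0;
}
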
diff --git a/js/src/fuzz-tests/testStructuredCloneReader.cpp b/js/src/fuzz-tests/testStructuredCloneReader.cpp index ed59ba1614f9e..0f8e92f15b8ac 100644 --- a/js/src/fuzz-tests/testStructuredCloneReader.cpp +++ b/js/src/fuzz-tests/testStructuredCloneReader.cpp @@ -26,7 +26,7 @@ static int testStructuredCloneReaderInit(int* argc, char*** argv) { return 0; } static int testStructuredCloneReaderFuzz(const uint8_t* buf, size_t size) { auto gcGuard = mozilla::MakeScopeExit([&] { JS::PrepareForFullGC(gCx); - JS::NonIncrementalGC(gCx, GC_NORMAL, JS::gcreason::API); + JS::NonIncrementalGC(gCx, GC_NORMAL, JS::GCReason::API); }); if (!size) return 0; diff --git a/js/src/gc/Allocator.cpp b/js/src/gc/Allocator.cpp index 0d846da729213..4fdd4f23cc87f 100644 --- a/js/src/gc/Allocator.cpp +++ b/js/src/gc/Allocator.cpp @@ -108,7 +108,7 @@ JSObject* GCRuntime::tryNewNurseryObject(JSContext* cx, size_t thingSize, } if (allowGC && !cx->suppressGC) { - cx->runtime()->gc.minorGC(JS::gcreason::OUT_OF_NURSERY); + cx->runtime()->gc.minorGC(JS::GCReason::OUT_OF_NURSERY); // Exceeding gcMaxBytes while tenuring can disable the Nursery. if (cx->nursery().isEnabled()) { @@ -164,7 +164,7 @@ JSString* GCRuntime::tryNewNurseryString(JSContext* cx, size_t thingSize, } if (allowGC && !cx->suppressGC) { - cx->runtime()->gc.minorGC(JS::gcreason::OUT_OF_NURSERY); + cx->runtime()->gc.minorGC(JS::GCReason::OUT_OF_NURSERY); // Exceeding gcMaxBytes while tenuring can disable the Nursery, and // other heuristics can disable nursery strings for this zone. @@ -276,7 +276,7 @@ template // all-compartments, non-incremental, shrinking GC and wait for // sweeping to finish. JS::PrepareForFullGC(cx); - cx->runtime()->gc.gc(GC_SHRINK, JS::gcreason::LAST_DITCH); + cx->runtime()->gc.gc(GC_SHRINK, JS::GCReason::LAST_DITCH); cx->runtime()->gc.waitBackgroundSweepOrAllocEnd(); t = tryNewTenuredThing(cx, kind, thingSize); @@ -351,7 +351,7 @@ bool GCRuntime::gcIfNeededAtAllocation(JSContext* cx) { if (isIncrementalGCInProgress() && cx->zone()->zoneSize.gcBytes() > cx->zone()->threshold.gcTriggerBytes()) { PrepareZoneForGC(cx->zone()); - gc(GC_NORMAL, JS::gcreason::INCREMENTAL_TOO_SLOW); + gc(GC_NORMAL, JS::GCReason::INCREMENTAL_TOO_SLOW); } return true; diff --git a/js/src/gc/ArenaList.h b/js/src/gc/ArenaList.h index 3486f934e4586..5736572826ffe 100644 --- a/js/src/gc/ArenaList.h +++ b/js/src/gc/ArenaList.h @@ -335,7 +335,7 @@ class ArenaLists { bool checkEmptyArenaList(AllocKind kind); - bool relocateArenas(Arena*& relocatedListOut, JS::gcreason::Reason reason, + bool relocateArenas(Arena*& relocatedListOut, JS::GCReason reason, js::SliceBudget& sliceBudget, gcstats::Statistics& stats); void queueForegroundObjectsForSweep(FreeOp* fop); diff --git a/js/src/gc/GC.cpp b/js/src/gc/GC.cpp index 6bc751995b1fe..2d107462712cb 100644 --- a/js/src/gc/GC.cpp +++ b/js/src/gc/GC.cpp @@ -915,7 +915,7 @@ GCRuntime::GCRuntime(JSRuntime* rt) cleanUpEverything(false), grayBufferState(GCRuntime::GrayBufferState::Unused), grayBitsValid(false), - majorGCTriggerReason(JS::gcreason::NO_REASON), + majorGCTriggerReason(JS::GCReason::NO_REASON), fullGCForAtomsRequested_(false), minorGCNumber(0), majorGCNumber(0), @@ -1053,12 +1053,12 @@ void GCRuntime::setZeal(uint8_t zeal, uint32_t frequency) { if (zeal == 0) { if (hasZealMode(ZealMode::GenerationalGC)) { - evictNursery(JS::gcreason::DEBUG_GC); + evictNursery(JS::GCReason::DEBUG_GC); nursery().leaveZealMode(); } if (isIncrementalGCInProgress()) { - finishGC(JS::gcreason::DEBUG_GC); + finishGC(JS::GCReason::DEBUG_GC); } } 
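
The GCAPI.h hunk earlier in this patch keeps the list of reasons in the GCREASONS(D) macro, expands it with MAKE_REASON into the new enum class, declares ExplainGCReason, and pins NUM_TELEMETRY_REASONS = 100 so the telemetry histogram never needs resizing. A rough, self-contained sketch of that X-macro technique follows; the reason names and the body of the explain helper are illustrative, not copied from the tree:

// Sketch of the X-macro pattern: one reason list expanded into both the
// enum class and a name-lookup function.
#include <cstdio>

#define MY_GC_REASONS(D) \
  D(API, 0)              \
  D(DOM_IPC, 1)          \
  D(MEM_PRESSURE, 2)

enum class GCReason {
#define MAKE_REASON(name, val) name = val,
  MY_GC_REASONS(MAKE_REASON)
#undef MAKE_REASON
  // Fixed upper bound so telemetry bucket counts stay stable as reasons
  // are appended over time.
  NUM_TELEMETRY_REASONS = 100
};

const char* ExplainGCReason(GCReason reason) {
  switch (reason) {
#define MAKE_CASE(name, val) \
  case GCReason::name:       \
    return #name;
    MY_GC_REASONS(MAKE_CASE)
#undef MAKE_CASE
    default:
      return "(unknown reason)";
  }
}

int main() {
  std::printf("%s\n", ExplainGCReason(GCReason::MEM_PRESSURE));
  return 0;
}
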
@@ -1098,7 +1098,7 @@ void GCRuntime::unsetZeal(uint8_t zeal) { } if (zealMode == ZealMode::GenerationalGC) { - evictNursery(JS::gcreason::DEBUG_GC); + evictNursery(JS::GCReason::DEBUG_GC); nursery().leaveZealMode(); } @@ -1106,7 +1106,7 @@ void GCRuntime::unsetZeal(uint8_t zeal) { if (zealModeBits == 0) { if (isIncrementalGCInProgress()) { - finishGC(JS::gcreason::DEBUG_GC); + finishGC(JS::GCReason::DEBUG_GC); } zealFrequency = 0; @@ -2075,8 +2075,8 @@ bool GCRuntime::shouldCompact() { return false; } - if (initialReason == JS::gcreason::USER_INACTIVE || - initialReason == JS::gcreason::MEM_PRESSURE) { + if (initialReason == JS::GCReason::USER_INACTIVE || + initialReason == JS::GCReason::MEM_PRESSURE) { return true; } @@ -2122,8 +2122,8 @@ Arena* ArenaList::removeRemainingArenas(Arena** arenap) { return remainingArenas; } -static bool ShouldRelocateAllArenas(JS::gcreason::Reason reason) { - return reason == JS::gcreason::DEBUG_GC; +static bool ShouldRelocateAllArenas(JS::GCReason reason) { + return reason == JS::GCReason::DEBUG_GC; } /* @@ -2297,12 +2297,12 @@ static inline bool CanProtectArenas() { return SystemPageSize() <= ArenaSize; } -static inline bool ShouldProtectRelocatedArenas(JS::gcreason::Reason reason) { +static inline bool ShouldProtectRelocatedArenas(JS::GCReason reason) { // For zeal mode collections we don't release the relocated arenas // immediately. Instead we protect them and keep them around until the next // collection so we can catch any stray accesses to them. #ifdef DEBUG - return reason == JS::gcreason::DEBUG_GC && CanProtectArenas(); + return reason == JS::GCReason::DEBUG_GC && CanProtectArenas(); #else return false; #endif @@ -2336,7 +2336,7 @@ Arena* ArenaList::relocateArenas(Arena* toRelocate, Arena* relocated, static const float MIN_ZONE_RECLAIM_PERCENT = 2.0; static bool ShouldRelocateZone(size_t arenaCount, size_t relocCount, - JS::gcreason::Reason reason) { + JS::GCReason reason) { if (relocCount == 0) { return false; } @@ -2358,8 +2358,7 @@ static AllocKinds CompactingAllocKinds() { return result; } -bool ArenaLists::relocateArenas(Arena*& relocatedListOut, - JS::gcreason::Reason reason, +bool ArenaLists::relocateArenas(Arena*& relocatedListOut, JS::GCReason reason, SliceBudget& sliceBudget, gcstats::Statistics& stats) { // This is only called from the main thread while we are doing a GC, so @@ -2411,7 +2410,7 @@ bool ArenaLists::relocateArenas(Arena*& relocatedListOut, return true; } -bool GCRuntime::relocateArenas(Zone* zone, JS::gcreason::Reason reason, +bool GCRuntime::relocateArenas(Zone* zone, JS::GCReason reason, Arena*& relocatedListOut, SliceBudget& sliceBudget) { gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::COMPACT_MOVE); @@ -3190,7 +3189,7 @@ bool SliceBudget::checkOverBudget() { return over; } -void GCRuntime::requestMajorGC(JS::gcreason::Reason reason) { +void GCRuntime::requestMajorGC(JS::GCReason reason) { MOZ_ASSERT(!CurrentThreadIsPerformingGC()); if (majorGCRequested()) { @@ -3201,7 +3200,7 @@ void GCRuntime::requestMajorGC(JS::gcreason::Reason reason) { rt->mainContextFromOwnThread()->requestInterrupt(InterruptReason::GC); } -void Nursery::requestMinorGC(JS::gcreason::Reason reason) const { +void Nursery::requestMinorGC(JS::GCReason reason) const { MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime())); MOZ_ASSERT(!CurrentThreadIsPerformingGC()); @@ -3222,18 +3221,18 @@ void Nursery::requestMinorGC(JS::gcreason::Reason reason) const { // memory or memory used by GC things may vary between recording or replaying, // but other 
behaviors that would normally be non-deterministic (timers and so // forth) are captured in the recording and replayed exactly. -static bool RecordReplayCheckCanGC(JS::gcreason::Reason reason) { +static bool RecordReplayCheckCanGC(JS::GCReason reason) { if (!mozilla::recordreplay::IsRecordingOrReplaying()) { return true; } switch (reason) { - case JS::gcreason::EAGER_ALLOC_TRIGGER: - case JS::gcreason::LAST_DITCH: - case JS::gcreason::TOO_MUCH_MALLOC: - case JS::gcreason::ALLOC_TRIGGER: - case JS::gcreason::DELAYED_ATOMS_GC: - case JS::gcreason::TOO_MUCH_WASM_MEMORY: + case JS::GCReason::EAGER_ALLOC_TRIGGER: + case JS::GCReason::LAST_DITCH: + case JS::GCReason::TOO_MUCH_MALLOC: + case JS::GCReason::ALLOC_TRIGGER: + case JS::GCReason::DELAYED_ATOMS_GC: + case JS::GCReason::TOO_MUCH_WASM_MEMORY: return false; default: @@ -3247,7 +3246,7 @@ static bool RecordReplayCheckCanGC(JS::gcreason::Reason reason) { return true; } -bool GCRuntime::triggerGC(JS::gcreason::Reason reason) { +bool GCRuntime::triggerGC(JS::GCReason reason) { /* * Don't trigger GCs if this is being called off the main thread from * onTooMuchMalloc(). @@ -3286,7 +3285,7 @@ void GCRuntime::maybeAllocTriggerZoneGC(Zone* zone, const AutoLockGC& lock) { if (usedBytes >= thresholdBytes) { // The threshold has been surpassed, immediately trigger a GC, which // will be done non-incrementally. - triggerZoneGC(zone, JS::gcreason::ALLOC_TRIGGER, usedBytes, thresholdBytes); + triggerZoneGC(zone, JS::GCReason::ALLOC_TRIGGER, usedBytes, thresholdBytes); return; } @@ -3311,7 +3310,7 @@ void GCRuntime::maybeAllocTriggerZoneGC(Zone* zone, const AutoLockGC& lock) { // to try to avoid performing non-incremental GCs on zones // which allocate a lot of data, even when incremental slices // can't be triggered via scheduling in the event loop. - triggerZoneGC(zone, JS::gcreason::ALLOC_TRIGGER, usedBytes, + triggerZoneGC(zone, JS::GCReason::ALLOC_TRIGGER, usedBytes, igcThresholdBytes); // Delay the next slice until a certain amount of allocation @@ -3322,8 +3321,8 @@ void GCRuntime::maybeAllocTriggerZoneGC(Zone* zone, const AutoLockGC& lock) { } } -bool GCRuntime::triggerZoneGC(Zone* zone, JS::gcreason::Reason reason, - size_t used, size_t threshold) { +bool GCRuntime::triggerZoneGC(Zone* zone, JS::GCReason reason, size_t used, + size_t threshold) { MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt)); /* GC is already running. 
*/ @@ -3367,7 +3366,7 @@ void GCRuntime::maybeGC(Zone* zone) { #ifdef JS_GC_ZEAL if (hasZealMode(ZealMode::Alloc) || hasZealMode(ZealMode::RootsChange)) { JS::PrepareForFullGC(rt->mainContextFromOwnThread()); - gc(GC_NORMAL, JS::gcreason::DEBUG_GC); + gc(GC_NORMAL, JS::GCReason::DEBUG_GC); return; } #endif @@ -3383,7 +3382,7 @@ void GCRuntime::maybeGC(Zone* zone) { !isIncrementalGCInProgress() && !isBackgroundSweeping()) { stats().recordTrigger(usedBytes, threshold); PrepareZoneForGC(zone); - startGC(GC_NORMAL, JS::gcreason::EAGER_ALLOC_TRIGGER); + startGC(GC_NORMAL, JS::GCReason::EAGER_ALLOC_TRIGGER); } } @@ -3393,7 +3392,7 @@ void GCRuntime::triggerFullGCForAtoms(JSContext* cx) { MOZ_ASSERT(!JS::RuntimeHeapIsCollecting()); MOZ_ASSERT(cx->canCollectAtoms()); fullGCForAtomsRequested_ = false; - MOZ_RELEASE_ASSERT(triggerGC(JS::gcreason::DELAYED_ATOMS_GC)); + MOZ_RELEASE_ASSERT(triggerGC(JS::GCReason::DELAYED_ATOMS_GC)); } // Do all possible decommit immediately from the current thread without @@ -3962,7 +3961,7 @@ void GCRuntime::purgeRuntime() { bool GCRuntime::shouldPreserveJITCode(Realm* realm, const TimeStamp& currentTime, - JS::gcreason::Reason reason, + JS::GCReason reason, bool canAllocateMoreCode) { static const auto oneSecond = TimeDuration::FromSeconds(1); @@ -3986,7 +3985,7 @@ bool GCRuntime::shouldPreserveJITCode(Realm* realm, return true; } - if (reason == JS::gcreason::DEBUG_GC) { + if (reason == JS::GCReason::DEBUG_GC) { return true; } @@ -4107,10 +4106,10 @@ static void RelazifyFunctions(Zone* zone, AllocKind kind) { } } -static bool ShouldCollectZone(Zone* zone, JS::gcreason::Reason reason) { +static bool ShouldCollectZone(Zone* zone, JS::GCReason reason) { // If we are repeating a GC because we noticed dead compartments haven't // been collected, then only collect zones containing those compartments. - if (reason == JS::gcreason::COMPARTMENT_REVIVED) { + if (reason == JS::GCReason::COMPARTMENT_REVIVED) { for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) { if (comp->gcState.scheduledForDestruction) { return true; @@ -4147,7 +4146,7 @@ static bool ShouldCollectZone(Zone* zone, JS::gcreason::Reason reason) { return zone->canCollect(); } -bool GCRuntime::prepareZonesForCollection(JS::gcreason::Reason reason, +bool GCRuntime::prepareZonesForCollection(JS::GCReason reason, bool* isFullOut) { #ifdef DEBUG /* Assert that zone state is as we expect */ @@ -4211,7 +4210,7 @@ bool GCRuntime::prepareZonesForCollection(JS::gcreason::Reason reason, * Check that we do collect the atoms zone if we triggered a GC for that * purpose. */ - MOZ_ASSERT_IF(reason == JS::gcreason::DELAYED_ATOMS_GC, + MOZ_ASSERT_IF(reason == JS::GCReason::DELAYED_ATOMS_GC, atomsZone->isGCMarking()); /* Check that at least one zone is scheduled for collection. 
*/ @@ -4269,8 +4268,7 @@ static void BufferGrayRoots(GCParallelTask* task) { task->runtime()->gc.bufferGrayRoots(); } -bool GCRuntime::beginMarkPhase(JS::gcreason::Reason reason, - AutoGCSession& session) { +bool GCRuntime::beginMarkPhase(JS::GCReason reason, AutoGCSession& session) { #ifdef DEBUG if (fullCompartmentChecks) { checkForCompartmentMismatches(); @@ -4931,7 +4929,7 @@ bool GCRuntime::findInterZoneEdges() { return true; } -void GCRuntime::groupZonesForSweeping(JS::gcreason::Reason reason) { +void GCRuntime::groupZonesForSweeping(JS::GCReason reason) { #ifdef DEBUG for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { MOZ_ASSERT(zone->gcSweepGroupEdges().empty()); @@ -5809,8 +5807,7 @@ IncrementalProgress GCRuntime::endSweepingSweepGroup(FreeOp* fop, return Finished; } -void GCRuntime::beginSweepPhase(JS::gcreason::Reason reason, - AutoGCSession& session) { +void GCRuntime::beginSweepPhase(JS::GCReason reason, AutoGCSession& session) { /* * Sweep phase. * @@ -6655,7 +6652,7 @@ void GCRuntime::beginCompactPhase() { startedCompacting = true; } -IncrementalProgress GCRuntime::compactPhase(JS::gcreason::Reason reason, +IncrementalProgress GCRuntime::compactPhase(JS::GCReason reason, SliceBudget& sliceBudget, AutoGCSession& session) { assertBackgroundSweepingFinished(); @@ -6918,12 +6915,12 @@ void GCRuntime::pushZealSelectedObjects() { #endif } -static bool IsShutdownGC(JS::gcreason::Reason reason) { - return reason == JS::gcreason::SHUTDOWN_CC || - reason == JS::gcreason::DESTROY_RUNTIME; +static bool IsShutdownGC(JS::GCReason reason) { + return reason == JS::GCReason::SHUTDOWN_CC || + reason == JS::GCReason::DESTROY_RUNTIME; } -static bool ShouldCleanUpEverything(JS::gcreason::Reason reason, +static bool ShouldCleanUpEverything(JS::GCReason reason, JSGCInvocationKind gckind) { // During shutdown, we must clean everything up, for the sake of leak // detection. When a runtime has no contexts, or we're doing a GC before a @@ -6931,17 +6928,16 @@ static bool ShouldCleanUpEverything(JS::gcreason::Reason reason, return IsShutdownGC(reason) || gckind == GC_SHRINK; } -static bool ShouldSweepOnBackgroundThread(JS::gcreason::Reason reason) { - return reason != JS::gcreason::DESTROY_RUNTIME && !gcTracer.traceEnabled() && +static bool ShouldSweepOnBackgroundThread(JS::GCReason reason) { + return reason != JS::GCReason::DESTROY_RUNTIME && !gcTracer.traceEnabled() && CanUseExtraThreads(); } -void GCRuntime::incrementalSlice(SliceBudget& budget, - JS::gcreason::Reason reason, +void GCRuntime::incrementalSlice(SliceBudget& budget, JS::GCReason reason, AutoGCSession& session) { AutoDisableBarriers disableBarriers(rt); - bool destroyingRuntime = (reason == JS::gcreason::DESTROY_RUNTIME); + bool destroyingRuntime = (reason == JS::GCReason::DESTROY_RUNTIME); number++; @@ -6953,7 +6949,7 @@ void GCRuntime::incrementalSlice(SliceBudget& budget, * collection was triggered by runDebugGC() and incremental GC has not been * cancelled by resetIncrementalGC(). 
*/ - useZeal = reason == JS::gcreason::DEBUG_GC && !budget.isUnlimited(); + useZeal = reason == JS::GCReason::DEBUG_GC && !budget.isUnlimited(); #else bool useZeal = false; #endif @@ -7182,7 +7178,7 @@ gc::AbortReason gc::IsIncrementalGCUnsafe(JSRuntime* rt) { return gc::AbortReason::None; } -static inline void CheckZoneIsScheduled(Zone* zone, JS::gcreason::Reason reason, +static inline void CheckZoneIsScheduled(Zone* zone, JS::GCReason reason, const char* trigger) { #ifdef DEBUG if (zone->isGCScheduled()) { @@ -7192,7 +7188,7 @@ static inline void CheckZoneIsScheduled(Zone* zone, JS::gcreason::Reason reason, fprintf(stderr, "CheckZoneIsScheduled: Zone %p not scheduled as expected in %s GC " "for %s trigger\n", - zone, JS::gcreason::ExplainReason(reason), trigger); + zone, JS::ExplainGCReason(reason), trigger); JSRuntime* rt = zone->runtimeFromMainThread(); for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { fprintf(stderr, " Zone %p:%s%s\n", zone.get(), @@ -7205,8 +7201,7 @@ static inline void CheckZoneIsScheduled(Zone* zone, JS::gcreason::Reason reason, } GCRuntime::IncrementalResult GCRuntime::budgetIncrementalGC( - bool nonincrementalByAPI, JS::gcreason::Reason reason, - SliceBudget& budget) { + bool nonincrementalByAPI, JS::GCReason reason, SliceBudget& budget) { if (nonincrementalByAPI) { stats().nonincremental(gc::AbortReason::NonIncrementalRequested); budget.makeUnlimited(); @@ -7215,14 +7210,14 @@ GCRuntime::IncrementalResult GCRuntime::budgetIncrementalGC( // API. This isn't required for correctness, but sometimes during tests // the caller expects this GC to collect certain objects, and we need // to make sure to collect everything possible. - if (reason != JS::gcreason::ALLOC_TRIGGER) { + if (reason != JS::GCReason::ALLOC_TRIGGER) { return resetIncrementalGC(gc::AbortReason::NonIncrementalRequested); } return IncrementalResult::Ok; } - if (reason == JS::gcreason::ABORT_GC) { + if (reason == JS::GCReason::ABORT_GC) { budget.makeUnlimited(); stats().nonincremental(gc::AbortReason::AbortRequested); return resetIncrementalGC(gc::AbortReason::AbortRequested); @@ -7230,7 +7225,7 @@ GCRuntime::IncrementalResult GCRuntime::budgetIncrementalGC( AbortReason unsafeReason = IsIncrementalGCUnsafe(rt); if (unsafeReason == AbortReason::None) { - if (reason == JS::gcreason::COMPARTMENT_REVIVED) { + if (reason == JS::GCReason::COMPARTMENT_REVIVED) { unsafeReason = gc::AbortReason::CompartmentRevived; } else if (mode != JSGC_MODE_INCREMENTAL) { unsafeReason = gc::AbortReason::ModeChange; @@ -7381,7 +7376,7 @@ void GCRuntime::maybeCallGCCallback(JSGCStatus status) { * implementation. */ MOZ_NEVER_INLINE GCRuntime::IncrementalResult GCRuntime::gcCycle( - bool nonincrementalByAPI, SliceBudget budget, JS::gcreason::Reason reason) { + bool nonincrementalByAPI, SliceBudget budget, JS::GCReason reason) { // Assert if this is a GC unsafe region. 
rt->mainContextFromOwnThread()->verifyIsSafeToGC(); @@ -7397,7 +7392,7 @@ MOZ_NEVER_INLINE GCRuntime::IncrementalResult GCRuntime::gcCycle( auto result = budgetIncrementalGC(nonincrementalByAPI, reason, budget); if (result == IncrementalResult::ResetIncremental) { - reason = JS::gcreason::RESET; + reason = JS::GCReason::RESET; } if (shouldCollectNurseryForSlice(nonincrementalByAPI, budget)) { @@ -7406,7 +7401,7 @@ MOZ_NEVER_INLINE GCRuntime::IncrementalResult GCRuntime::gcCycle( AutoGCSession session(rt, JS::HeapState::MajorCollecting); - majorGCTriggerReason = JS::gcreason::NO_REASON; + majorGCTriggerReason = JS::GCReason::NO_REASON; { gcstats::AutoPhase ap(stats(), gcstats::PhaseKind::WAIT_BACKGROUND_THREAD); @@ -7479,18 +7474,18 @@ bool GCRuntime::shouldCollectNurseryForSlice(bool nonincrementalByAPI, } #ifdef JS_GC_ZEAL -static bool IsDeterministicGCReason(JS::gcreason::Reason reason) { +static bool IsDeterministicGCReason(JS::GCReason reason) { switch (reason) { - case JS::gcreason::API: - case JS::gcreason::DESTROY_RUNTIME: - case JS::gcreason::LAST_DITCH: - case JS::gcreason::TOO_MUCH_MALLOC: - case JS::gcreason::TOO_MUCH_WASM_MEMORY: - case JS::gcreason::ALLOC_TRIGGER: - case JS::gcreason::DEBUG_GC: - case JS::gcreason::CC_FORCED: - case JS::gcreason::SHUTDOWN_CC: - case JS::gcreason::ABORT_GC: + case JS::GCReason::API: + case JS::GCReason::DESTROY_RUNTIME: + case JS::GCReason::LAST_DITCH: + case JS::GCReason::TOO_MUCH_MALLOC: + case JS::GCReason::TOO_MUCH_WASM_MEMORY: + case JS::GCReason::ALLOC_TRIGGER: + case JS::GCReason::DEBUG_GC: + case JS::GCReason::CC_FORCED: + case JS::GCReason::SHUTDOWN_CC: + case JS::GCReason::ABORT_GC: return true; default: @@ -7547,7 +7542,7 @@ void GCRuntime::checkCanCallAPI() { MOZ_RELEASE_ASSERT(!JS::RuntimeHeapIsBusy()); } -bool GCRuntime::checkIfGCAllowedInCurrentState(JS::gcreason::Reason reason) { +bool GCRuntime::checkIfGCAllowedInCurrentState(JS::GCReason reason) { if (rt->mainContextFromOwnThread()->suppressGC) { return false; } @@ -7567,8 +7562,8 @@ bool GCRuntime::checkIfGCAllowedInCurrentState(JS::gcreason::Reason reason) { return true; } -bool GCRuntime::shouldRepeatForDeadZone(JS::gcreason::Reason reason) { - MOZ_ASSERT_IF(reason == JS::gcreason::COMPARTMENT_REVIVED, !isIncremental); +bool GCRuntime::shouldRepeatForDeadZone(JS::GCReason reason) { + MOZ_ASSERT_IF(reason == JS::GCReason::COMPARTMENT_REVIVED, !isIncremental); MOZ_ASSERT(!isIncrementalGCInProgress()); if (!isIncremental) { @@ -7585,7 +7580,7 @@ bool GCRuntime::shouldRepeatForDeadZone(JS::gcreason::Reason reason) { } void GCRuntime::collect(bool nonincrementalByAPI, SliceBudget budget, - JS::gcreason::Reason reason) { + JS::GCReason reason) { // Checks run for each request, even if we do not actually GC. checkCanCallAPI(); @@ -7607,7 +7602,7 @@ void GCRuntime::collect(bool nonincrementalByAPI, SliceBudget budget, IncrementalResult cycleResult = gcCycle(nonincrementalByAPI, budget, reason); - if (reason == JS::gcreason::ABORT_GC) { + if (reason == JS::GCReason::ABORT_GC) { MOZ_ASSERT(!isIncrementalGCInProgress()); stats().writeLogMessage("GC aborted by request"); break; @@ -7629,10 +7624,10 @@ void GCRuntime::collect(bool nonincrementalByAPI, SliceBudget budget, /* Need to re-schedule all zones for GC. 
*/ JS::PrepareForFullGC(rt->mainContextFromOwnThread()); repeat = true; - reason = JS::gcreason::ROOTS_REMOVED; + reason = JS::GCReason::ROOTS_REMOVED; } else if (shouldRepeatForDeadZone(reason)) { repeat = true; - reason = JS::gcreason::COMPARTMENT_REVIVED; + reason = JS::GCReason::COMPARTMENT_REVIVED; } } } while (repeat); @@ -7646,7 +7641,7 @@ void GCRuntime::collect(bool nonincrementalByAPI, SliceBudget budget, } #endif - if (reason == JS::gcreason::COMPARTMENT_REVIVED) { + if (reason == JS::GCReason::COMPARTMENT_REVIVED) { maybeDoCycleCollection(); } @@ -7669,10 +7664,9 @@ js::AutoEnqueuePendingParseTasksAfterGC:: } } -SliceBudget GCRuntime::defaultBudget(JS::gcreason::Reason reason, - int64_t millis) { +SliceBudget GCRuntime::defaultBudget(JS::GCReason reason, int64_t millis) { if (millis == 0) { - if (reason == JS::gcreason::ALLOC_TRIGGER) { + if (reason == JS::GCReason::ALLOC_TRIGGER) { millis = defaultSliceBudget(); } else if (schedulingState.inHighFrequencyGCMode() && tunables.isDynamicMarkSliceEnabled()) { @@ -7685,7 +7679,7 @@ SliceBudget GCRuntime::defaultBudget(JS::gcreason::Reason reason, return SliceBudget(TimeBudget(millis)); } -void GCRuntime::gc(JSGCInvocationKind gckind, JS::gcreason::Reason reason) { +void GCRuntime::gc(JSGCInvocationKind gckind, JS::GCReason reason) { // Watch out for calls to gc() that don't go through triggerGC(). if (!RecordReplayCheckCanGC(reason)) { return; @@ -7695,7 +7689,7 @@ void GCRuntime::gc(JSGCInvocationKind gckind, JS::gcreason::Reason reason) { collect(true, SliceBudget::unlimited(), reason); } -void GCRuntime::startGC(JSGCInvocationKind gckind, JS::gcreason::Reason reason, +void GCRuntime::startGC(JSGCInvocationKind gckind, JS::GCReason reason, int64_t millis) { MOZ_ASSERT(!isIncrementalGCInProgress()); if (!JS::IsIncrementalGCEnabled(rt->mainContextFromOwnThread())) { @@ -7706,12 +7700,12 @@ void GCRuntime::startGC(JSGCInvocationKind gckind, JS::gcreason::Reason reason, collect(false, defaultBudget(reason, millis), reason); } -void GCRuntime::gcSlice(JS::gcreason::Reason reason, int64_t millis) { +void GCRuntime::gcSlice(JS::GCReason reason, int64_t millis) { MOZ_ASSERT(isIncrementalGCInProgress()); collect(false, defaultBudget(reason, millis), reason); } -void GCRuntime::finishGC(JS::gcreason::Reason reason) { +void GCRuntime::finishGC(JS::GCReason reason) { MOZ_ASSERT(isIncrementalGCInProgress()); // If we're not collecting because we're out of memory then skip the @@ -7734,7 +7728,7 @@ void GCRuntime::abortGC() { checkCanCallAPI(); MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC); - collect(false, SliceBudget::unlimited(), JS::gcreason::ABORT_GC); + collect(false, SliceBudget::unlimited(), JS::GCReason::ABORT_GC); } static bool ZonesSelected(JSRuntime* rt) { @@ -7752,7 +7746,7 @@ void GCRuntime::startDebugGC(JSGCInvocationKind gckind, SliceBudget& budget) { JS::PrepareForFullGC(rt->mainContextFromOwnThread()); } invocationKind = gckind; - collect(false, budget, JS::gcreason::DEBUG_GC); + collect(false, budget, JS::GCReason::DEBUG_GC); } void GCRuntime::debugGCSlice(SliceBudget& budget) { @@ -7760,7 +7754,7 @@ void GCRuntime::debugGCSlice(SliceBudget& budget) { if (!ZonesSelected(rt)) { JS::PrepareForIncrementalGC(rt->mainContextFromOwnThread()); } - collect(false, budget, JS::gcreason::DEBUG_GC); + collect(false, budget, JS::GCReason::DEBUG_GC); } /* Schedule a full GC unless a zone will already be collected. 
*/ @@ -7798,10 +7792,10 @@ void GCRuntime::onOutOfMallocMemory(const AutoLockGC& lock) { decommitAllWithoutUnlocking(lock); } -void GCRuntime::minorGC(JS::gcreason::Reason reason, gcstats::PhaseKind phase) { +void GCRuntime::minorGC(JS::GCReason reason, gcstats::PhaseKind phase) { MOZ_ASSERT(!JS::RuntimeHeapIsBusy()); - MOZ_ASSERT_IF(reason == JS::gcreason::EVICT_NURSERY, + MOZ_ASSERT_IF(reason == JS::GCReason::EVICT_NURSERY, !rt->mainContextFromOwnThread()->suppressGC); if (rt->mainContextFromOwnThread()->suppressGC) { return; @@ -7861,7 +7855,7 @@ void GCRuntime::startBackgroundFreeAfterMinorGC() { JS::AutoDisableGenerationalGC::AutoDisableGenerationalGC(JSContext* cx) : cx(cx) { if (!cx->generationalDisabled) { - cx->runtime()->gc.evictNursery(JS::gcreason::API); + cx->runtime()->gc.evictNursery(JS::GCReason::API); cx->nursery().disable(); } ++cx->generationalDisabled; @@ -7885,11 +7879,11 @@ bool GCRuntime::gcIfRequested() { } if (majorGCRequested()) { - if (majorGCTriggerReason == JS::gcreason::DELAYED_ATOMS_GC && + if (majorGCTriggerReason == JS::GCReason::DELAYED_ATOMS_GC && !rt->mainContextFromOwnThread()->canCollectAtoms()) { // A GC was requested to collect the atoms zone, but it's no longer // possible. Skip this collection. - majorGCTriggerReason = JS::gcreason::NO_REASON; + majorGCTriggerReason = JS::GCReason::NO_REASON; return false; } @@ -7907,7 +7901,7 @@ bool GCRuntime::gcIfRequested() { void js::gc::FinishGC(JSContext* cx) { if (JS::IsIncrementalGCInProgress(cx)) { JS::PrepareForIncrementalGC(cx); - JS::FinishIncrementalGC(cx, JS::gcreason::API); + JS::FinishIncrementalGC(cx, JS::GCReason::API); } cx->runtime()->gc.waitBackgroundFreeEnd(); @@ -8188,7 +8182,7 @@ void GCRuntime::runDebugGC() { } if (hasZealMode(ZealMode::GenerationalGC)) { - return minorGC(JS::gcreason::DEBUG_GC); + return minorGC(JS::GCReason::DEBUG_GC); } PrepareForDebugGC(rt); @@ -8211,7 +8205,7 @@ void GCRuntime::runDebugGC() { if (!isIncrementalGCInProgress()) { invocationKind = GC_SHRINK; } - collect(false, budget, JS::gcreason::DEBUG_GC); + collect(false, budget, JS::GCReason::DEBUG_GC); /* Reset the slice size when we get to the sweep or compact phases. 
*/ if ((initialState == State::Mark && incrementalState == State::Sweep) || @@ -8226,11 +8220,11 @@ void GCRuntime::runDebugGC() { if (!isIncrementalGCInProgress()) { invocationKind = GC_NORMAL; } - collect(false, budget, JS::gcreason::DEBUG_GC); + collect(false, budget, JS::GCReason::DEBUG_GC); } else if (hasZealMode(ZealMode::Compact)) { - gc(GC_SHRINK, JS::gcreason::DEBUG_GC); + gc(GC_SHRINK, JS::GCReason::DEBUG_GC); } else { - gc(GC_NORMAL, JS::gcreason::DEBUG_GC); + gc(GC_NORMAL, JS::GCReason::DEBUG_GC); } #endif @@ -8526,27 +8520,24 @@ JS_PUBLIC_API void JS::SkipZoneForGC(Zone* zone) { zone->unscheduleGC(); } JS_PUBLIC_API void JS::NonIncrementalGC(JSContext* cx, JSGCInvocationKind gckind, - gcreason::Reason reason) { + GCReason reason) { MOZ_ASSERT(gckind == GC_NORMAL || gckind == GC_SHRINK); cx->runtime()->gc.gc(gckind, reason); } JS_PUBLIC_API void JS::StartIncrementalGC(JSContext* cx, JSGCInvocationKind gckind, - gcreason::Reason reason, - int64_t millis) { + GCReason reason, int64_t millis) { MOZ_ASSERT(gckind == GC_NORMAL || gckind == GC_SHRINK); cx->runtime()->gc.startGC(gckind, reason, millis); } -JS_PUBLIC_API void JS::IncrementalGCSlice(JSContext* cx, - gcreason::Reason reason, +JS_PUBLIC_API void JS::IncrementalGCSlice(JSContext* cx, GCReason reason, int64_t millis) { cx->runtime()->gc.gcSlice(reason, millis); } -JS_PUBLIC_API void JS::FinishIncrementalGC(JSContext* cx, - gcreason::Reason reason) { +JS_PUBLIC_API void JS::FinishIncrementalGC(JSContext* cx, GCReason reason) { cx->runtime()->gc.finishGC(reason); } @@ -8921,7 +8912,7 @@ void AutoAssertEmptyNursery::checkCondition(JSContext* cx) { AutoEmptyNursery::AutoEmptyNursery(JSContext* cx) : AutoAssertEmptyNursery() { MOZ_ASSERT(!cx->suppressGC); cx->runtime()->gc.stats().suspendPhases(); - cx->runtime()->gc.evictNursery(JS::gcreason::EVICT_NURSERY); + cx->runtime()->gc.evictNursery(JS::GCReason::EVICT_NURSERY); cx->runtime()->gc.stats().resumePhases(); checkCondition(cx); } diff --git a/js/src/gc/GCInternals.h b/js/src/gc/GCInternals.h index a1a282725fedc..1a203480070b8 100644 --- a/js/src/gc/GCInternals.h +++ b/js/src/gc/GCInternals.h @@ -282,9 +282,9 @@ class MOZ_RAII AutoEmptyNursery : public AutoAssertEmptyNursery { extern void DelayCrossCompartmentGrayMarking(JSObject* src); -inline bool IsOOMReason(JS::gcreason::Reason reason) { - return reason == JS::gcreason::LAST_DITCH || - reason == JS::gcreason::MEM_PRESSURE; +inline bool IsOOMReason(JS::GCReason reason) { + return reason == JS::GCReason::LAST_DITCH || + reason == JS::GCReason::MEM_PRESSURE; } TenuredCell* AllocateCellInGC(JS::Zone* zone, AllocKind thingKind); diff --git a/js/src/gc/GCRuntime.h b/js/src/gc/GCRuntime.h index 8f96df8c871fd..d2924756af0cc 100644 --- a/js/src/gc/GCRuntime.h +++ b/js/src/gc/GCRuntime.h @@ -246,19 +246,19 @@ class GCRuntime { void resetParameter(JSGCParamKey key, AutoLockGC& lock); uint32_t getParameter(JSGCParamKey key, const AutoLockGC& lock); - MOZ_MUST_USE bool triggerGC(JS::gcreason::Reason reason); + MOZ_MUST_USE bool triggerGC(JS::GCReason reason); void maybeAllocTriggerZoneGC(Zone* zone, const AutoLockGC& lock); // The return value indicates if we were able to do the GC. - bool triggerZoneGC(Zone* zone, JS::gcreason::Reason reason, size_t usedBytes, + bool triggerZoneGC(Zone* zone, JS::GCReason reason, size_t usedBytes, size_t thresholdBytes); void maybeGC(Zone* zone); // The return value indicates whether a major GC was performed. 
bool gcIfRequested(); - void gc(JSGCInvocationKind gckind, JS::gcreason::Reason reason); - void startGC(JSGCInvocationKind gckind, JS::gcreason::Reason reason, + void gc(JSGCInvocationKind gckind, JS::GCReason reason); + void startGC(JSGCInvocationKind gckind, JS::GCReason reason, int64_t millis = 0); - void gcSlice(JS::gcreason::Reason reason, int64_t millis = 0); - void finishGC(JS::gcreason::Reason reason); + void gcSlice(JS::GCReason reason, int64_t millis = 0); + void finishGC(JS::GCReason reason); void abortGC(); void startDebugGC(JSGCInvocationKind gckind, SliceBudget& budget); void debugGCSlice(SliceBudget& budget); @@ -356,7 +356,7 @@ class GCRuntime { return false; } - if (!triggerGC(JS::gcreason::TOO_MUCH_MALLOC)) { + if (!triggerGC(JS::GCReason::TOO_MUCH_MALLOC)) { return false; } @@ -420,7 +420,7 @@ class GCRuntime { void setGrayBitsInvalid() { grayBitsValid = false; } bool majorGCRequested() const { - return majorGCTriggerReason != JS::gcreason::NO_REASON; + return majorGCTriggerReason != JS::GCReason::NO_REASON; } bool fullGCForAtomsRequested() const { return fullGCForAtomsRequested_; } @@ -550,10 +550,10 @@ class GCRuntime { bool wantBackgroundAllocation(const AutoLockGC& lock) const; bool startBackgroundAllocTaskIfIdle(); - void requestMajorGC(JS::gcreason::Reason reason); - SliceBudget defaultBudget(JS::gcreason::Reason reason, int64_t millis); + void requestMajorGC(JS::GCReason reason); + SliceBudget defaultBudget(JS::GCReason reason, int64_t millis); IncrementalResult budgetIncrementalGC(bool nonincrementalByAPI, - JS::gcreason::Reason reason, + JS::GCReason reason, SliceBudget& budget); IncrementalResult resetIncrementalGC(AbortReason reason); @@ -563,11 +563,11 @@ class GCRuntime { // Check if the system state is such that GC has been supressed // or otherwise delayed. 
- MOZ_MUST_USE bool checkIfGCAllowedInCurrentState(JS::gcreason::Reason reason); + MOZ_MUST_USE bool checkIfGCAllowedInCurrentState(JS::GCReason reason); gcstats::ZoneGCStats scanZonesBeforeGC(); void collect(bool nonincrementalByAPI, SliceBudget budget, - JS::gcreason::Reason reason) JS_HAZ_GC_CALL; + JS::GCReason reason) JS_HAZ_GC_CALL; /* * Run one GC "cycle" (either a slice of incremental GC or an entire @@ -580,9 +580,9 @@ class GCRuntime { */ MOZ_MUST_USE IncrementalResult gcCycle(bool nonincrementalByAPI, SliceBudget budget, - JS::gcreason::Reason reason); - bool shouldRepeatForDeadZone(JS::gcreason::Reason reason); - void incrementalSlice(SliceBudget& budget, JS::gcreason::Reason reason, + JS::GCReason reason); + bool shouldRepeatForDeadZone(JS::GCReason reason); + void incrementalSlice(SliceBudget& budget, JS::GCReason reason, AutoGCSession& session); MOZ_MUST_USE bool shouldCollectNurseryForSlice(bool nonincrementalByAPI, SliceBudget& budget); @@ -592,13 +592,11 @@ class GCRuntime { void pushZealSelectedObjects(); void purgeRuntime(); - MOZ_MUST_USE bool beginMarkPhase(JS::gcreason::Reason reason, - AutoGCSession& session); - bool prepareZonesForCollection(JS::gcreason::Reason reason, bool* isFullOut); + MOZ_MUST_USE bool beginMarkPhase(JS::GCReason reason, AutoGCSession& session); + bool prepareZonesForCollection(JS::GCReason reason, bool* isFullOut); bool shouldPreserveJITCode(JS::Realm* realm, const mozilla::TimeStamp& currentTime, - JS::gcreason::Reason reason, - bool canAllocateMoreCode); + JS::GCReason reason, bool canAllocateMoreCode); void startBackgroundFreeAfterMinorGC(); void traceRuntimeForMajorGC(JSTracer* trc, AutoGCSession& session); void traceRuntimeAtoms(JSTracer* trc, const AutoAccessAtomsZone& atomsAccess); @@ -618,8 +616,8 @@ class GCRuntime { void markAllWeakReferences(gcstats::PhaseKind phase); void markAllGrayReferences(gcstats::PhaseKind phase); - void beginSweepPhase(JS::gcreason::Reason reason, AutoGCSession& session); - void groupZonesForSweeping(JS::gcreason::Reason reason); + void beginSweepPhase(JS::GCReason reason, AutoGCSession& session); + void groupZonesForSweeping(JS::GCReason reason); MOZ_MUST_USE bool findInterZoneEdges(); void getNextSweepGroup(); IncrementalProgress markGrayReferencesInCurrentGroup(FreeOp* fop, @@ -655,13 +653,13 @@ class GCRuntime { void assertBackgroundSweepingFinished(); bool shouldCompact(); void beginCompactPhase(); - IncrementalProgress compactPhase(JS::gcreason::Reason reason, + IncrementalProgress compactPhase(JS::GCReason reason, SliceBudget& sliceBudget, AutoGCSession& session); void endCompactPhase(); void sweepTypesAfterCompacting(Zone* zone); void sweepZoneAfterCompacting(Zone* zone); - MOZ_MUST_USE bool relocateArenas(Zone* zone, JS::gcreason::Reason reason, + MOZ_MUST_USE bool relocateArenas(Zone* zone, JS::GCReason reason, Arena*& relocatedListOut, SliceBudget& sliceBudget); void updateTypeDescrObjects(MovingTracer* trc, Zone* zone); @@ -804,7 +802,7 @@ class GCRuntime { */ UnprotectedData grayBitsValid; - mozilla::Atomic majorGCTriggerReason; @@ -834,7 +832,7 @@ class GCRuntime { MainThreadData invocationKind; /* The initial GC reason, taken from the first slice. */ - MainThreadData initialReason; + MainThreadData initialReason; /* * The current incremental GC phase. 
This is also used internally in @@ -1041,10 +1039,10 @@ class GCRuntime { return stats().addressOfAllocsSinceMinorGCNursery(); } - void minorGC(JS::gcreason::Reason reason, + void minorGC(JS::GCReason reason, gcstats::PhaseKind phase = gcstats::PhaseKind::MINOR_GC) JS_HAZ_GC_CALL; - void evictNursery(JS::gcreason::Reason reason = JS::gcreason::EVICT_NURSERY) { + void evictNursery(JS::GCReason reason = JS::GCReason::EVICT_NURSERY) { minorGC(reason, gcstats::PhaseKind::EVICT_NURSERY); } diff --git a/js/src/gc/Nursery.cpp b/js/src/gc/Nursery.cpp index 2aa69a979efa2..5c5a26a6c53ae 100644 --- a/js/src/gc/Nursery.cpp +++ b/js/src/gc/Nursery.cpp @@ -112,7 +112,7 @@ js::Nursery::Nursery(JSRuntime* rt) enableProfiling_(false), canAllocateStrings_(false), reportTenurings_(0), - minorGCTriggerReason_(JS::gcreason::NO_REASON) + minorGCTriggerReason_(JS::GCReason::NO_REASON) #ifdef JS_GC_ZEAL , lastCanary_(nullptr) @@ -588,7 +588,7 @@ void js::Nursery::renderProfileJSON(JSONPrinter& json) const { return; } - if (previousGC.reason == JS::gcreason::NO_REASON) { + if (previousGC.reason == JS::GCReason::NO_REASON) { // If the nursery was empty when the last minorGC was requested, then // no nursery collection will have been performed but JSON may still be // requested. (And as a public API, this function should not crash in @@ -603,7 +603,7 @@ void js::Nursery::renderProfileJSON(JSONPrinter& json) const { json.property("status", "complete"); - json.property("reason", JS::gcreason::ExplainReason(previousGC.reason)); + json.property("reason", JS::ExplainGCReason(previousGC.reason)); json.property("bytes_tenured", previousGC.tenuredBytes); json.property("cells_tenured", previousGC.tenuredCells); json.property("strings_tenured", @@ -702,16 +702,16 @@ bool js::Nursery::needIdleTimeCollection() const { return minorGCRequested() || freeSpace() < threshold; } -static inline bool IsFullStoreBufferReason(JS::gcreason::Reason reason) { - return reason == JS::gcreason::FULL_WHOLE_CELL_BUFFER || - reason == JS::gcreason::FULL_GENERIC_BUFFER || - reason == JS::gcreason::FULL_VALUE_BUFFER || - reason == JS::gcreason::FULL_CELL_PTR_BUFFER || - reason == JS::gcreason::FULL_SLOT_BUFFER || - reason == JS::gcreason::FULL_SHAPE_BUFFER; +static inline bool IsFullStoreBufferReason(JS::GCReason reason) { + return reason == JS::GCReason::FULL_WHOLE_CELL_BUFFER || + reason == JS::GCReason::FULL_GENERIC_BUFFER || + reason == JS::GCReason::FULL_VALUE_BUFFER || + reason == JS::GCReason::FULL_CELL_PTR_BUFFER || + reason == JS::GCReason::FULL_SLOT_BUFFER || + reason == JS::GCReason::FULL_SHAPE_BUFFER; } -void js::Nursery::collect(JS::gcreason::Reason reason) { +void js::Nursery::collect(JS::GCReason reason) { JSRuntime* rt = runtime(); MOZ_ASSERT(!rt->mainContextFromOwnThread()->suppressGC); @@ -750,7 +750,7 @@ void js::Nursery::collect(JS::gcreason::Reason reason) { MOZ_ASSERT(!IsNurseryAllocable(AllocKind::OBJECT_GROUP)); TenureCountCache tenureCounts; - previousGC.reason = JS::gcreason::NO_REASON; + previousGC.reason = JS::GCReason::NO_REASON; if (!isEmpty()) { doCollection(reason, tenureCounts); } else { @@ -843,9 +843,9 @@ void js::Nursery::collect(JS::gcreason::Reason reason) { TimeDuration totalTime = profileDurations_[ProfileKey::Total]; rt->addTelemetry(JS_TELEMETRY_GC_MINOR_US, totalTime.ToMicroseconds()); - rt->addTelemetry(JS_TELEMETRY_GC_MINOR_REASON, reason); + rt->addTelemetry(JS_TELEMETRY_GC_MINOR_REASON, uint32_t(reason)); if (totalTime.ToMilliseconds() > 1.0) { - rt->addTelemetry(JS_TELEMETRY_GC_MINOR_REASON_LONG, 
reason); + rt->addTelemetry(JS_TELEMETRY_GC_MINOR_REASON_LONG, uint32_t(reason)); } rt->addTelemetry(JS_TELEMETRY_GC_NURSERY_BYTES, sizeOfHeapCommitted()); rt->addTelemetry(JS_TELEMETRY_GC_PRETENURE_COUNT, pretenureCount); @@ -859,8 +859,7 @@ void js::Nursery::collect(JS::gcreason::Reason reason) { stats().maybePrintProfileHeaders(); fprintf(stderr, "MinorGC: %20s %5.1f%% %4u ", - JS::gcreason::ExplainReason(reason), promotionRate * 100, - maxChunkCount()); + JS::ExplainGCReason(reason), promotionRate * 100, maxChunkCount()); printProfileDurations(profileDurations_); if (reportTenurings_) { @@ -875,7 +874,7 @@ void js::Nursery::collect(JS::gcreason::Reason reason) { } } -void js::Nursery::doCollection(JS::gcreason::Reason reason, +void js::Nursery::doCollection(JS::GCReason reason, TenureCountCache& tenureCounts) { JSRuntime* rt = runtime(); AutoGCSession session(rt, JS::HeapState::MinorCollecting); @@ -1119,7 +1118,7 @@ MOZ_ALWAYS_INLINE void js::Nursery::setStartPosition() { currentStartPosition_ = position(); } -void js::Nursery::maybeResizeNursery(JS::gcreason::Reason reason) { +void js::Nursery::maybeResizeNursery(JS::GCReason reason) { static const double GrowThreshold = 0.03; static const double ShrinkThreshold = 0.01; unsigned newMaxNurseryChunks; diff --git a/js/src/gc/Nursery.h b/js/src/gc/Nursery.h index cae18fdebfd30..d49b1e1580ba3 100644 --- a/js/src/gc/Nursery.h +++ b/js/src/gc/Nursery.h @@ -271,7 +271,7 @@ class Nursery { static const size_t MaxNurseryBufferSize = 1024; /* Do a minor collection. */ - void collect(JS::gcreason::Reason reason); + void collect(JS::GCReason reason); /* * If the thing at |*ref| in the Nursery has been forwarded, set |*ref| to @@ -352,16 +352,14 @@ class Nursery { return (void*)¤tStringEnd_; } - void requestMinorGC(JS::gcreason::Reason reason) const; + void requestMinorGC(JS::GCReason reason) const; bool minorGCRequested() const { - return minorGCTriggerReason_ != JS::gcreason::NO_REASON; - } - JS::gcreason::Reason minorGCTriggerReason() const { - return minorGCTriggerReason_; + return minorGCTriggerReason_ != JS::GCReason::NO_REASON; } + JS::GCReason minorGCTriggerReason() const { return minorGCTriggerReason_; } void clearMinorGCRequest() { - minorGCTriggerReason_ = JS::gcreason::NO_REASON; + minorGCTriggerReason_ = JS::GCReason::NO_REASON; } bool needIdleTimeCollection() const; @@ -442,7 +440,7 @@ class Nursery { * mutable as it is set by the store buffer, which otherwise cannot modify * anything in the nursery. */ - mutable JS::gcreason::Reason minorGCTriggerReason_; + mutable JS::GCReason minorGCTriggerReason_; /* Profiling data. */ @@ -465,7 +463,7 @@ class Nursery { ProfileDurations totalDurations_; struct { - JS::gcreason::Reason reason = JS::gcreason::NO_REASON; + JS::GCReason reason = JS::GCReason::NO_REASON; size_t nurseryCapacity = 0; size_t nurseryLazyCapacity = 0; size_t nurseryUsedBytes = 0; @@ -562,8 +560,7 @@ class Nursery { /* Common internal allocator function. */ void* allocate(size_t size); - void doCollection(JS::gcreason::Reason reason, - gc::TenureCountCache& tenureCounts); + void doCollection(JS::GCReason reason, gc::TenureCountCache& tenureCounts); /* * Move the object at |src| in the Nursery to an already-allocated cell @@ -600,7 +597,7 @@ class Nursery { void sweepMapAndSetObjects(); /* Change the allocable space provided by the nursery. 
*/ - void maybeResizeNursery(JS::gcreason::Reason reason); + void maybeResizeNursery(JS::GCReason reason); void growAllocableSpace(); void shrinkAllocableSpace(unsigned newCount); void minimizeAllocableSpace(); diff --git a/js/src/gc/Scheduling.h b/js/src/gc/Scheduling.h index baa2d41470a04..c2e8eef5ba8a9 100644 --- a/js/src/gc/Scheduling.h +++ b/js/src/gc/Scheduling.h @@ -90,13 +90,13 @@ * * While code generally takes the above factors into account in only an ad-hoc * fashion, the API forces the user to pick a "reason" for the GC. We have a - * bunch of JS::gcreason reasons in GCAPI.h. These fall into a few categories + * bunch of JS::GCReason reasons in GCAPI.h. These fall into a few categories * that generally coincide with one or more of the above factors. * * Embedding reasons: * * 1) Do a GC now because the embedding knows something useful about the - * zone's memory retention state. These are gcreasons like LOAD_END, + * zone's memory retention state. These are GCReasons like LOAD_END, * PAGE_HIDE, SET_NEW_DOCUMENT, DOM_UTILS. Mostly, Gecko uses these to * indicate that a significant fraction of the scheduled zone's memory is * probably reclaimable. diff --git a/js/src/gc/Statistics.cpp b/js/src/gc/Statistics.cpp index 1f5eeccf693ba..b20a8c36ef66a 100644 --- a/js/src/gc/Statistics.cpp +++ b/js/src/gc/Statistics.cpp @@ -40,8 +40,8 @@ using mozilla::TimeStamp; * larger-numbered reasons to pile up in the last telemetry bucket, or switch * to GC_REASON_3 and bump the max value. */ -JS_STATIC_ASSERT(JS::gcreason::NUM_TELEMETRY_REASONS >= - JS::gcreason::NUM_REASONS); +JS_STATIC_ASSERT(JS::GCReason::NUM_TELEMETRY_REASONS >= + JS::GCReason::NUM_REASONS); using PhaseKindRange = decltype(mozilla::MakeEnumeratedRange(PhaseKind::FIRST, PhaseKind::LIMIT)); @@ -64,11 +64,10 @@ const char* js::gcstats::ExplainInvocationKind(JSGCInvocationKind gckind) { } } -JS_PUBLIC_API const char* JS::gcreason::ExplainReason( - JS::gcreason::Reason reason) { +JS_PUBLIC_API const char* JS::ExplainGCReason(JS::GCReason reason) { switch (reason) { #define SWITCH_REASON(name, _) \ - case JS::gcreason::name: \ + case JS::GCReason::name: \ return #name; GCREASONS(SWITCH_REASON) @@ -279,7 +278,8 @@ UniqueChars Statistics::formatCompactSliceMessage() const { "%s%s; Times: "; char buffer[1024]; SprintfLiteral(buffer, format, index, t(slice.duration()), budgetDescription, - t(slice.start - slices_[0].start), ExplainReason(slice.reason), + t(slice.start - slices_[0].start), + ExplainGCReason(slice.reason), slice.wasReset() ? "yes - " : "no", slice.wasReset() ? ExplainAbortReason(slice.resetReason) : ""); @@ -442,7 +442,7 @@ UniqueChars Statistics::formatDetailedDescription() const { char buffer[1024]; SprintfLiteral( buffer, format, ExplainInvocationKind(gckind), - ExplainReason(slices_[0].reason), nonincremental() ? "no - " : "yes", + ExplainGCReason(slices_[0].reason), nonincremental() ? "no - " : "yes", nonincremental() ? ExplainAbortReason(nonincrementalReason_) : "", zoneStats.collectedZoneCount, zoneStats.zoneCount, zoneStats.sweptZoneCount, zoneStats.collectedCompartmentCount, @@ -475,7 +475,7 @@ UniqueChars Statistics::formatDetailedSliceDescription( "; char buffer[1024]; SprintfLiteral( - buffer, format, i, ExplainReason(slice.reason), + buffer, format, i, ExplainGCReason(slice.reason), slice.wasReset() ? "yes - " : "no", slice.wasReset() ? 
ExplainAbortReason(slice.resetReason) : "", gc::StateName(slice.initialState), gc::StateName(slice.finalState), @@ -650,7 +650,7 @@ void Statistics::formatJsonDescription(uint64_t timestamp, json.property("total_time", total, JSONPrinter::MILLISECONDS); // #4 // We might be able to omit reason if perf.html was able to retrive it // from the first slice. But it doesn't do this yet. - json.property("reason", ExplainReason(slices_[0].reason)); // #5 + json.property("reason", ExplainGCReason(slices_[0].reason)); // #5 json.property("zones_collected", zoneStats.collectedZoneCount); // #6 json.property("total_zones", zoneStats.zoneCount); // #7 json.property("total_compartments", zoneStats.compartmentCount); // #8 @@ -713,7 +713,7 @@ void Statistics::formatJsonSliceDescription(unsigned i, const SliceData& slice, json.property("slice", i); // JSON Property #1 json.property("pause", slice.duration(), JSONPrinter::MILLISECONDS); // #2 - json.property("reason", ExplainReason(slice.reason)); // #3 + json.property("reason", ExplainGCReason(slice.reason)); // #3 json.property("initial_state", gc::StateName(slice.initialState)); // #4 json.property("final_state", gc::StateName(slice.finalState)); // #5 json.property("budget", budgetDescription); // #6 @@ -1024,7 +1024,7 @@ void Statistics::endGC() { thresholdTriggered = false; } -void Statistics::beginNurseryCollection(JS::gcreason::Reason reason) { +void Statistics::beginNurseryCollection(JS::GCReason reason) { count(COUNT_MINOR_GC); startingMinorGCNumber = runtime->gc.minorGCCount(); if (nurseryCollectionCallback) { @@ -1034,7 +1034,7 @@ void Statistics::beginNurseryCollection(JS::gcreason::Reason reason) { } } -void Statistics::endNurseryCollection(JS::gcreason::Reason reason) { +void Statistics::endNurseryCollection(JS::GCReason reason) { if (nurseryCollectionCallback) { (*nurseryCollectionCallback)( runtime->mainContextFromOwnThread(), @@ -1046,7 +1046,7 @@ void Statistics::endNurseryCollection(JS::gcreason::Reason reason) { void Statistics::beginSlice(const ZoneGCStats& zoneStats, JSGCInvocationKind gckind, SliceBudget budget, - JS::gcreason::Reason reason) { + JS::GCReason reason) { MOZ_ASSERT(phaseStack.empty() || (phaseStack.length() == 1 && phaseStack[0] == Phase::MUTATOR)); @@ -1064,7 +1064,7 @@ void Statistics::beginSlice(const ZoneGCStats& zoneStats, return; } - runtime->addTelemetry(JS_TELEMETRY_GC_REASON, reason); + runtime->addTelemetry(JS_TELEMETRY_GC_REASON, uint32_t(reason)); // Slice callbacks should only fire for the outermost level. bool wasFullGC = zoneStats.isFullCollection(); @@ -1485,7 +1485,7 @@ void Statistics::printSliceProfile() { bool full = zoneStats.isFullCollection(); fprintf(stderr, "MajorGC: %20s %1d -> %1d %1s%1s%1s%1s ", - ExplainReason(slice.reason), int(slice.initialState), + ExplainGCReason(slice.reason), int(slice.initialState), int(slice.finalState), full ? "F" : "", shrinking ? "S" : "", nonIncremental ? "N" : "", reset ? 
"R" : ""); diff --git a/js/src/gc/Statistics.h b/js/src/gc/Statistics.h index 4f48ac26caac2..28e1cc4d3f783 100644 --- a/js/src/gc/Statistics.h +++ b/js/src/gc/Statistics.h @@ -163,7 +163,7 @@ struct Statistics { void resumePhases(); void beginSlice(const ZoneGCStats& zoneStats, JSGCInvocationKind gckind, - SliceBudget budget, JS::gcreason::Reason reason); + SliceBudget budget, JS::GCReason reason); void endSlice(); MOZ_MUST_USE bool startTimingMutator(); @@ -223,8 +223,8 @@ struct Statistics { return &allocsSinceMinorGC.nursery; } - void beginNurseryCollection(JS::gcreason::Reason reason); - void endNurseryCollection(JS::gcreason::Reason reason); + void beginNurseryCollection(JS::GCReason reason); + void endNurseryCollection(JS::GCReason reason); TimeStamp beginSCC(); void endSCC(unsigned scc, TimeStamp start); @@ -245,7 +245,7 @@ struct Statistics { static const size_t MAX_SUSPENDED_PHASES = MAX_PHASE_NESTING * 3; struct SliceData { - SliceData(SliceBudget budget, JS::gcreason::Reason reason, TimeStamp start, + SliceData(SliceBudget budget, JS::GCReason reason, TimeStamp start, size_t startFaults, gc::State initialState) : budget(budget), reason(reason), @@ -257,7 +257,7 @@ struct Statistics { endFaults(0) {} SliceBudget budget; - JS::gcreason::Reason reason; + JS::GCReason reason; gc::State initialState, finalState; gc::AbortReason resetReason; TimeStamp start, end; @@ -460,7 +460,7 @@ struct Statistics { struct MOZ_RAII AutoGCSlice { AutoGCSlice(Statistics& stats, const ZoneGCStats& zoneStats, JSGCInvocationKind gckind, SliceBudget budget, - JS::gcreason::Reason reason) + JS::GCReason reason) : stats(stats) { stats.beginSlice(zoneStats, gckind, budget, reason); } diff --git a/js/src/gc/StoreBuffer.cpp b/js/src/gc/StoreBuffer.cpp index e81443d6aab85..1f67d95bbd5af 100644 --- a/js/src/gc/StoreBuffer.cpp +++ b/js/src/gc/StoreBuffer.cpp @@ -81,7 +81,7 @@ void StoreBuffer::clear() { bufferGeneric.clear(); } -void StoreBuffer::setAboutToOverflow(JS::gcreason::Reason reason) { +void StoreBuffer::setAboutToOverflow(JS::GCReason reason) { if (!aboutToOverflow_) { aboutToOverflow_ = true; runtime_->gc.stats().count(gcstats::COUNT_STOREBUFFER_OVERFLOW); @@ -132,7 +132,7 @@ ArenaCellSet* StoreBuffer::WholeCellBuffer::allocateCellSet(Arena* arena) { if (isAboutToOverflow()) { rt->gc.storeBuffer().setAboutToOverflow( - JS::gcreason::FULL_WHOLE_CELL_BUFFER); + JS::GCReason::FULL_WHOLE_CELL_BUFFER); } return cells; diff --git a/js/src/gc/StoreBuffer.h b/js/src/gc/StoreBuffer.h index e588187520737..ce80631bd5340 100644 --- a/js/src/gc/StoreBuffer.h +++ b/js/src/gc/StoreBuffer.h @@ -237,7 +237,7 @@ class StoreBuffer { } if (isAboutToOverflow()) { - owner->setAboutToOverflow(JS::gcreason::FULL_GENERIC_BUFFER); + owner->setAboutToOverflow(JS::GCReason::FULL_GENERIC_BUFFER); } } @@ -292,7 +292,7 @@ class StoreBuffer { typedef PointerEdgeHasher Hasher; - static const auto FullBufferReason = JS::gcreason::FULL_CELL_PTR_BUFFER; + static const auto FullBufferReason = JS::GCReason::FULL_CELL_PTR_BUFFER; }; struct ValueEdge { @@ -327,7 +327,7 @@ class StoreBuffer { typedef PointerEdgeHasher Hasher; - static const auto FullBufferReason = JS::gcreason::FULL_VALUE_BUFFER; + static const auto FullBufferReason = JS::GCReason::FULL_VALUE_BUFFER; }; struct SlotsEdge { @@ -410,7 +410,7 @@ class StoreBuffer { static bool match(const SlotsEdge& k, const Lookup& l) { return k == l; } } Hasher; - static const auto FullBufferReason = JS::gcreason::FULL_SLOT_BUFFER; + static const auto FullBufferReason = 
JS::GCReason::FULL_SLOT_BUFFER; }; template @@ -520,7 +520,7 @@ class StoreBuffer { void traceGenericEntries(JSTracer* trc) { bufferGeneric.trace(this, trc); } /* For use by our owned buffers and for testing. */ - void setAboutToOverflow(JS::gcreason::Reason); + void setAboutToOverflow(JS::GCReason); void addSizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf, JS::GCSizes* sizes); diff --git a/js/src/gc/Zone.cpp b/js/src/gc/Zone.cpp index 9994aec574a23..ca643b30ed5fb 100644 --- a/js/src/gc/Zone.cpp +++ b/js/src/gc/Zone.cpp @@ -489,7 +489,7 @@ void JS::Zone::maybeTriggerGCForTooMuchMalloc(js::gc::MemoryCounter& counter, return; } - if (!rt->gc.triggerZoneGC(this, JS::gcreason::TOO_MUCH_MALLOC, + if (!rt->gc.triggerZoneGC(this, JS::GCReason::TOO_MUCH_MALLOC, counter.bytes(), counter.maxBytes())) { return; } diff --git a/js/src/jsapi-tests/testBinASTReader.cpp b/js/src/jsapi-tests/testBinASTReader.cpp index 39e5a93798414..ae336f786414c 100644 --- a/js/src/jsapi-tests/testBinASTReader.cpp +++ b/js/src/jsapi-tests/testBinASTReader.cpp @@ -245,7 +245,7 @@ void runTestFromPath(JSContext* cx, const char* path) { // running everything from the same cx and without returning to JS, there // is nothing to deallocate the ASTs. JS::PrepareForFullGC(cx); - cx->runtime()->gc.gc(GC_NORMAL, JS::gcreason::NO_REASON); + cx->runtime()->gc.gc(GC_NORMAL, JS::GCReason::NO_REASON); } LifoAllocScope allocScope(&cx->tempLifoAlloc()); diff --git a/js/src/jsapi-tests/testErrorInterceptorGC.cpp b/js/src/jsapi-tests/testErrorInterceptorGC.cpp index 209db1cb1f0e5..83ceecdb1a44c 100644 --- a/js/src/jsapi-tests/testErrorInterceptorGC.cpp +++ b/js/src/jsapi-tests/testErrorInterceptorGC.cpp @@ -8,7 +8,7 @@ namespace { struct ErrorInterceptorWithGC : JSErrorInterceptor { void interceptError(JSContext* cx, JS::HandleValue val) override { JS::PrepareForFullGC(cx); - JS::NonIncrementalGC(cx, GC_SHRINK, JS::gcreason::DEBUG_GC); + JS::NonIncrementalGC(cx, GC_SHRINK, JS::GCReason::DEBUG_GC); } }; diff --git a/js/src/jsapi-tests/testGCFinalizeCallback.cpp b/js/src/jsapi-tests/testGCFinalizeCallback.cpp index c49fdf60e5415..cac9e8d6f73e5 100644 --- a/js/src/jsapi-tests/testGCFinalizeCallback.cpp +++ b/js/src/jsapi-tests/testGCFinalizeCallback.cpp @@ -21,10 +21,10 @@ BEGIN_TEST(testGCFinalizeCallback) { /* Full GC, incremental. */ FinalizeCalls = 0; JS::PrepareForFullGC(cx); - JS::StartIncrementalGC(cx, GC_NORMAL, JS::gcreason::API, 1000000); + JS::StartIncrementalGC(cx, GC_NORMAL, JS::GCReason::API, 1000000); while (cx->runtime()->gc.isIncrementalGCInProgress()) { JS::PrepareForFullGC(cx); - JS::IncrementalGCSlice(cx, JS::gcreason::API, 1000000); + JS::IncrementalGCSlice(cx, JS::GCReason::API, 1000000); } CHECK(!cx->runtime()->gc.isIncrementalGCInProgress()); CHECK(cx->runtime()->gc.isFullGc()); @@ -47,7 +47,7 @@ BEGIN_TEST(testGCFinalizeCallback) { /* Zone GC, non-incremental, single zone. 
*/ FinalizeCalls = 0; JS::PrepareZoneForGC(global1->zone()); - JS::NonIncrementalGC(cx, GC_NORMAL, JS::gcreason::API); + JS::NonIncrementalGC(cx, GC_NORMAL, JS::GCReason::API); CHECK(!cx->runtime()->gc.isFullGc()); CHECK(checkSingleGroup()); CHECK(checkFinalizeStatus()); @@ -57,7 +57,7 @@ BEGIN_TEST(testGCFinalizeCallback) { JS::PrepareZoneForGC(global1->zone()); JS::PrepareZoneForGC(global2->zone()); JS::PrepareZoneForGC(global3->zone()); - JS::NonIncrementalGC(cx, GC_NORMAL, JS::gcreason::API); + JS::NonIncrementalGC(cx, GC_NORMAL, JS::GCReason::API); CHECK(!cx->runtime()->gc.isFullGc()); CHECK(checkSingleGroup()); CHECK(checkFinalizeStatus()); @@ -65,10 +65,10 @@ BEGIN_TEST(testGCFinalizeCallback) { /* Zone GC, incremental, single zone. */ FinalizeCalls = 0; JS::PrepareZoneForGC(global1->zone()); - JS::StartIncrementalGC(cx, GC_NORMAL, JS::gcreason::API, 1000000); + JS::StartIncrementalGC(cx, GC_NORMAL, JS::GCReason::API, 1000000); while (cx->runtime()->gc.isIncrementalGCInProgress()) { JS::PrepareZoneForGC(global1->zone()); - JS::IncrementalGCSlice(cx, JS::gcreason::API, 1000000); + JS::IncrementalGCSlice(cx, JS::GCReason::API, 1000000); } CHECK(!cx->runtime()->gc.isIncrementalGCInProgress()); CHECK(!cx->runtime()->gc.isFullGc()); @@ -80,12 +80,12 @@ BEGIN_TEST(testGCFinalizeCallback) { JS::PrepareZoneForGC(global1->zone()); JS::PrepareZoneForGC(global2->zone()); JS::PrepareZoneForGC(global3->zone()); - JS::StartIncrementalGC(cx, GC_NORMAL, JS::gcreason::API, 1000000); + JS::StartIncrementalGC(cx, GC_NORMAL, JS::GCReason::API, 1000000); while (cx->runtime()->gc.isIncrementalGCInProgress()) { JS::PrepareZoneForGC(global1->zone()); JS::PrepareZoneForGC(global2->zone()); JS::PrepareZoneForGC(global3->zone()); - JS::IncrementalGCSlice(cx, JS::gcreason::API, 1000000); + JS::IncrementalGCSlice(cx, JS::GCReason::API, 1000000); } CHECK(!cx->runtime()->gc.isIncrementalGCInProgress()); CHECK(!cx->runtime()->gc.isFullGc()); diff --git a/js/src/jsapi-tests/testGCGrayMarking.cpp b/js/src/jsapi-tests/testGCGrayMarking.cpp index 1711a3e299eb4..d092d0d653f89 100644 --- a/js/src/jsapi-tests/testGCGrayMarking.cpp +++ b/js/src/jsapi-tests/testGCGrayMarking.cpp @@ -464,7 +464,7 @@ bool TestCCWs() { CHECK(GetCrossCompartmentWrapper(target) == wrapper); CHECK(IsMarkedBlack(wrapper)); - JS::FinishIncrementalGC(cx, JS::gcreason::API); + JS::FinishIncrementalGC(cx, JS::GCReason::API); // Test behaviour of gray CCWs marked black by a barrier during incremental // GC. @@ -500,7 +500,7 @@ bool TestCCWs() { CHECK(!JS::ObjectIsMarkedGray(target)); // Final state: source and target are black. 
- JS::FinishIncrementalGC(cx, JS::gcreason::API); + JS::FinishIncrementalGC(cx, JS::GCReason::API); CHECK(IsMarkedBlack(wrapper)); CHECK(IsMarkedBlack(target)); @@ -739,7 +739,7 @@ bool ZoneGC(JS::Zone* zone) { uint32_t oldMode = JS_GetGCParameter(cx, JSGC_MODE); JS_SetGCParameter(cx, JSGC_MODE, JSGC_MODE_ZONE); JS::PrepareZoneForGC(zone); - cx->runtime()->gc.gc(GC_NORMAL, JS::gcreason::API); + cx->runtime()->gc.gc(GC_NORMAL, JS::GCReason::API); CHECK(!cx->runtime()->gc.isFullGc()); JS_SetGCParameter(cx, JSGC_MODE, oldMode); return true; diff --git a/js/src/jsapi-tests/testGCHeapPostBarriers.cpp b/js/src/jsapi-tests/testGCHeapPostBarriers.cpp index 8471d3b755f69..8bcee2620c695 100644 --- a/js/src/jsapi-tests/testGCHeapPostBarriers.cpp +++ b/js/src/jsapi-tests/testGCHeapPostBarriers.cpp @@ -131,7 +131,7 @@ bool TestHeapPostBarrierUpdate() { ptr = testStruct.release(); } - cx->minorGC(JS::gcreason::API); + cx->minorGC(JS::GCReason::API); W& wrapper = ptr->wrapper; CHECK(uintptr_t(wrapper.get()) != initialObjAsInt); @@ -140,7 +140,7 @@ bool TestHeapPostBarrierUpdate() { JS::DeletePolicy>()(ptr); - cx->minorGC(JS::gcreason::API); + cx->minorGC(JS::GCReason::API); return true; } @@ -166,7 +166,7 @@ bool TestHeapPostBarrierInitFailure() { // testStruct deleted here, as if we left this block due to an error. } - cx->minorGC(JS::gcreason::API); + cx->minorGC(JS::GCReason::API); return true; } diff --git a/js/src/jsapi-tests/testGCHooks.cpp b/js/src/jsapi-tests/testGCHooks.cpp index 93160f4d5561c..658bb15b888fc 100644 --- a/js/src/jsapi-tests/testGCHooks.cpp +++ b/js/src/jsapi-tests/testGCHooks.cpp @@ -20,7 +20,7 @@ static void NonIncrementalGCSliceCallback(JSContext* cx, MOZ_RELEASE_ASSERT(progress == expect[gSliceCallbackCount++]); MOZ_RELEASE_ASSERT(desc.isZone_ == false); MOZ_RELEASE_ASSERT(desc.invocationKind_ == GC_NORMAL); - MOZ_RELEASE_ASSERT(desc.reason_ == JS::gcreason::API); + MOZ_RELEASE_ASSERT(desc.reason_ == JS::GCReason::API); if (progress == GC_CYCLE_END) { mozilla::UniquePtr summary(desc.formatSummaryMessage(cx)); mozilla::UniquePtr message(desc.formatSliceMessage(cx)); @@ -41,16 +41,17 @@ END_TEST(testGCSliceCallback) static void RootsRemovedGCSliceCallback(JSContext* cx, JS::GCProgress progress, const JS::GCDescription& desc) { using namespace JS; - using namespace JS::gcreason; static GCProgress expectProgress[] = { GC_CYCLE_BEGIN, GC_SLICE_BEGIN, GC_SLICE_END, GC_SLICE_BEGIN, GC_SLICE_END, GC_CYCLE_END, GC_CYCLE_BEGIN, GC_SLICE_BEGIN, GC_SLICE_END, GC_CYCLE_END}; - static Reason expectReasons[] = { - DEBUG_GC, DEBUG_GC, DEBUG_GC, DEBUG_GC, DEBUG_GC, - DEBUG_GC, ROOTS_REMOVED, ROOTS_REMOVED, ROOTS_REMOVED, ROOTS_REMOVED}; + static GCReason expectReasons[] = { + GCReason::DEBUG_GC, GCReason::DEBUG_GC, GCReason::DEBUG_GC, + GCReason::DEBUG_GC, GCReason::DEBUG_GC, GCReason::DEBUG_GC, + GCReason::ROOTS_REMOVED, GCReason::ROOTS_REMOVED, GCReason::ROOTS_REMOVED, + GCReason::ROOTS_REMOVED}; static_assert( mozilla::ArrayLength(expectProgress) == @@ -87,7 +88,7 @@ BEGIN_TEST(testGCRootsRemoved) { // Trigger another GC after the current one in shrinking / shutdown GCs. 
cx->runtime()->gc.notifyRootsRemoved(); - JS::FinishIncrementalGC(cx, JS::gcreason::DEBUG_GC); + JS::FinishIncrementalGC(cx, JS::GCReason::DEBUG_GC); CHECK(!JS::IsIncrementalGCInProgress(cx)); JS::SetGCSliceCallback(cx, nullptr); diff --git a/js/src/jsapi-tests/testGCMarking.cpp b/js/src/jsapi-tests/testGCMarking.cpp index e57f86b3136ae..b3cfa63ae911e 100644 --- a/js/src/jsapi-tests/testGCMarking.cpp +++ b/js/src/jsapi-tests/testGCMarking.cpp @@ -341,7 +341,7 @@ BEGIN_TEST(testIncrementalRoots) { // Tenure everything so intentionally unrooted objects don't move before we // can use them. - cx->runtime()->gc.minorGC(JS::gcreason::API); + cx->runtime()->gc.minorGC(JS::GCReason::API); // Release all roots except for the AutoObjectVector. obj = root = nullptr; diff --git a/js/src/jsapi-tests/testGCUniqueId.cpp b/js/src/jsapi-tests/testGCUniqueId.cpp index ca9012e8da01e..20ccb6fb88f88 100644 --- a/js/src/jsapi-tests/testGCUniqueId.cpp +++ b/js/src/jsapi-tests/testGCUniqueId.cpp @@ -110,7 +110,7 @@ BEGIN_TEST(testGCUID) { // Force a compaction to move the object and check that the uid moved to // the new tenured heap location. JS::PrepareForFullGC(cx); - JS::NonIncrementalGC(cx, GC_SHRINK, JS::gcreason::API); + JS::NonIncrementalGC(cx, GC_SHRINK, JS::GCReason::API); // There's a very low probability that this check could fail, but it is // possible. If it becomes an annoying intermittent then we should make diff --git a/js/src/jsapi-tests/testGCWeakCache.cpp b/js/src/jsapi-tests/testGCWeakCache.cpp index 43bf7813dff6c..d2c985c669281 100644 --- a/js/src/jsapi-tests/testGCWeakCache.cpp +++ b/js/src/jsapi-tests/testGCWeakCache.cpp @@ -249,7 +249,7 @@ bool SweepCacheAndFinishGC(JSContext* cx, const Cache& cache) { CHECK(IsIncrementalGCInProgress(cx)); PrepareForIncrementalGC(cx); - IncrementalGCSlice(cx, JS::gcreason::API); + IncrementalGCSlice(cx, JS::GCReason::API); JS::Zone* zone = JS::GetObjectZone(global); CHECK(!IsIncrementalGCInProgress(cx)); diff --git a/js/src/jsapi-tests/testGCWeakRef.cpp b/js/src/jsapi-tests/testGCWeakRef.cpp index f3c3485d24f6a..ae3fa5d652467 100644 --- a/js/src/jsapi-tests/testGCWeakRef.cpp +++ b/js/src/jsapi-tests/testGCWeakRef.cpp @@ -28,7 +28,7 @@ BEGIN_TEST(testGCWeakRef) { JS::Rooted heap(cx, MyHeap(obj)); obj = nullptr; - cx->runtime()->gc.minorGC(JS::gcreason::API); + cx->runtime()->gc.minorGC(JS::GCReason::API); // The minor collection should have treated the weak ref as a strong ref, // so the object should still be live, despite not having any other live diff --git a/js/src/jsapi-tests/testPreserveJitCode.cpp b/js/src/jsapi-tests/testPreserveJitCode.cpp index f44e98d216167..4b04c7a732cd4 100644 --- a/js/src/jsapi-tests/testPreserveJitCode.cpp +++ b/js/src/jsapi-tests/testPreserveJitCode.cpp @@ -75,10 +75,10 @@ bool testPreserveJitCode(bool preserveJitCode, unsigned remainingIonScripts) { CHECK_EQUAL(value.toInt32(), 45); CHECK_EQUAL(countIonScripts(global), 1u); - NonIncrementalGC(cx, GC_NORMAL, gcreason::API); + NonIncrementalGC(cx, GC_NORMAL, GCReason::API); CHECK_EQUAL(countIonScripts(global), remainingIonScripts); - NonIncrementalGC(cx, GC_SHRINK, gcreason::API); + NonIncrementalGC(cx, GC_SHRINK, GCReason::API); CHECK_EQUAL(countIonScripts(global), 0u); return true; diff --git a/js/src/jsapi-tests/tests.h b/js/src/jsapi-tests/tests.h index 0745df9869c95..aa0d12470216d 100644 --- a/js/src/jsapi-tests/tests.h +++ b/js/src/jsapi-tests/tests.h @@ -498,7 +498,7 @@ class AutoLeaveZeal { JS_GetGCZealBits(cx_, &zealBits_, &frequency_, &dummy); JS_SetGCZeal(cx_, 0, 
0); JS::PrepareForFullGC(cx_); - JS::NonIncrementalGC(cx_, GC_SHRINK, JS::gcreason::DEBUG_GC); + JS::NonIncrementalGC(cx_, GC_SHRINK, JS::GCReason::DEBUG_GC); } ~AutoLeaveZeal() { JS_SetGCZeal(cx_, 0, 0); diff --git a/js/src/jsapi.cpp b/js/src/jsapi.cpp index d7db7fa919f17..65fa37a5b8446 100644 --- a/js/src/jsapi.cpp +++ b/js/src/jsapi.cpp @@ -1167,14 +1167,14 @@ JS_PUBLIC_API bool JS::IsIdleGCTaskNeeded(JSRuntime* rt) { JS_PUBLIC_API void JS::RunIdleTimeGCTask(JSRuntime* rt) { gc::GCRuntime& gc = rt->gc; if (gc.nursery().needIdleTimeCollection()) { - gc.minorGC(JS::gcreason::IDLE_TIME_COLLECTION); + gc.minorGC(JS::GCReason::IDLE_TIME_COLLECTION); } } JS_PUBLIC_API void JS_GC(JSContext* cx) { AssertHeapIsIdle(); JS::PrepareForFullGC(cx); - cx->runtime()->gc.gc(GC_NORMAL, JS::gcreason::API); + cx->runtime()->gc.gc(GC_NORMAL, JS::GCReason::API); } JS_PUBLIC_API void JS_MaybeGC(JSContext* cx) { diff --git a/js/src/jsfriendapi.cpp b/js/src/jsfriendapi.cpp index c6aa59205c085..cf24147e1fb6a 100644 --- a/js/src/jsfriendapi.cpp +++ b/js/src/jsfriendapi.cpp @@ -1118,7 +1118,7 @@ void DumpHeapTracer::onChild(const JS::GCCellPtr& thing) { void js::DumpHeap(JSContext* cx, FILE* fp, js::DumpHeapNurseryBehaviour nurseryBehaviour) { if (nurseryBehaviour == js::CollectNurseryBeforeDump) { - cx->runtime()->gc.evictNursery(JS::gcreason::API); + cx->runtime()->gc.evictNursery(JS::GCReason::API); } DumpHeapTracer dtrc(fp, cx); diff --git a/js/src/shell/js.cpp b/js/src/shell/js.cpp index f2412e3cddad5..88cb608f475bd 100644 --- a/js/src/shell/js.cpp +++ b/js/src/shell/js.cpp @@ -1840,7 +1840,7 @@ static void my_LargeAllocFailCallback() { MOZ_ASSERT(!JS::RuntimeHeapIsBusy()); JS::PrepareForFullGC(cx); - cx->runtime()->gc.gc(GC_NORMAL, JS::gcreason::SHARED_MEMORY_LIMIT); + cx->runtime()->gc.gc(GC_NORMAL, JS::GCReason::SHARED_MEMORY_LIMIT); } static const uint32_t CacheEntry_SOURCE = 0; diff --git a/js/src/vm/ArrayBufferObject.cpp b/js/src/vm/ArrayBufferObject.cpp index a56f0016fe481..510bc56db86c2 100644 --- a/js/src/vm/ArrayBufferObject.cpp +++ b/js/src/vm/ArrayBufferObject.cpp @@ -846,12 +846,12 @@ static bool CreateBuffer( // See MaximumLiveMappedBuffers comment above. if (liveBufferCount > StartSyncFullGCAtLiveBufferCount) { JS::PrepareForFullGC(cx); - JS::NonIncrementalGC(cx, GC_NORMAL, JS::gcreason::TOO_MUCH_WASM_MEMORY); + JS::NonIncrementalGC(cx, GC_NORMAL, JS::GCReason::TOO_MUCH_WASM_MEMORY); allocatedSinceLastTrigger = 0; } else if (liveBufferCount > StartTriggeringAtLiveBufferCount) { allocatedSinceLastTrigger++; if (allocatedSinceLastTrigger > AllocatedBuffersPerTrigger) { - Unused << cx->runtime()->gc.triggerGC(JS::gcreason::TOO_MUCH_WASM_MEMORY); + Unused << cx->runtime()->gc.triggerGC(JS::GCReason::TOO_MUCH_WASM_MEMORY); allocatedSinceLastTrigger = 0; } } else { diff --git a/js/src/vm/Debugger.cpp b/js/src/vm/Debugger.cpp index cad4f8ba4dea7..6af7f304c6af2 100644 --- a/js/src/vm/Debugger.cpp +++ b/js/src/vm/Debugger.cpp @@ -12395,7 +12395,7 @@ namespace dbg { // reasons this data is stored and replicated on each slice. Each // slice used to have its own GCReason, but now they are all the // same. 
- data->reason = gcreason::ExplainReason(slice.reason); + data->reason = ExplainGCReason(slice.reason); MOZ_ASSERT(data->reason); } diff --git a/js/src/vm/JSContext-inl.h b/js/src/vm/JSContext-inl.h index bbd112004e413..c960b5405335a 100644 --- a/js/src/vm/JSContext-inl.h +++ b/js/src/vm/JSContext-inl.h @@ -305,7 +305,7 @@ inline js::LifoAlloc& JSContext::typeLifoAlloc() { inline js::Nursery& JSContext::nursery() { return runtime()->gc.nursery(); } -inline void JSContext::minorGC(JS::gcreason::Reason reason) { +inline void JSContext::minorGC(JS::GCReason reason) { runtime()->gc.minorGC(reason); } diff --git a/js/src/vm/JSContext.h b/js/src/vm/JSContext.h index 7be7d3fc2203e..cdc2fccc3893a 100644 --- a/js/src/vm/JSContext.h +++ b/js/src/vm/JSContext.h @@ -740,7 +740,7 @@ struct JSContext : public JS::RootingContext, AllowCrossRealm allowCrossRealm = AllowCrossRealm::DontAllow) const; inline js::Nursery& nursery(); - inline void minorGC(JS::gcreason::Reason reason); + inline void minorGC(JS::GCReason reason); public: bool isExceptionPending() const { return throwing; } diff --git a/js/src/vm/Runtime.cpp b/js/src/vm/Runtime.cpp index c49e23bc763df..ff57d570abf65 100644 --- a/js/src/vm/Runtime.cpp +++ b/js/src/vm/Runtime.cpp @@ -280,7 +280,7 @@ void JSRuntime::destroyRuntime() { profilingScripts = false; JS::PrepareForFullGC(cx); - gc.gc(GC_NORMAL, JS::gcreason::DESTROY_RUNTIME); + gc.gc(GC_NORMAL, JS::GCReason::DESTROY_RUNTIME); } AutoNoteSingleThreadedRegion anstr; diff --git a/js/src/vm/Shape-inl.h b/js/src/vm/Shape-inl.h index 94a710743fe29..f233243d4aa2a 100644 --- a/js/src/vm/Shape-inl.h +++ b/js/src/vm/Shape-inl.h @@ -151,7 +151,7 @@ static inline void GetterSetterWriteBarrierPost(AccessorShape* shape) { if (nurseryShapes.length() == 1) { sb->putGeneric(NurseryShapesRef(shape->zone())); } else if (nurseryShapes.length() == MaxShapeVectorLength) { - sb->setAboutToOverflow(JS::gcreason::FULL_SHAPE_BUFFER); + sb->setAboutToOverflow(JS::GCReason::FULL_SHAPE_BUFFER); } } diff --git a/js/xpconnect/src/XPCComponents.cpp b/js/xpconnect/src/XPCComponents.cpp index 378f3a365f5b6..0744eb6c6b26e 100644 --- a/js/xpconnect/src/XPCComponents.cpp +++ b/js/xpconnect/src/XPCComponents.cpp @@ -1637,7 +1637,7 @@ NS_IMETHODIMP nsXPCComponents_Utils::ForceGC() { JSContext* cx = XPCJSContext::Get()->Context(); PrepareForFullGC(cx); - NonIncrementalGC(cx, GC_NORMAL, gcreason::COMPONENT_UTILS); + NonIncrementalGC(cx, GC_NORMAL, GCReason::COMPONENT_UTILS); return NS_OK; } @@ -1683,7 +1683,7 @@ NS_IMETHODIMP nsXPCComponents_Utils::ForceShrinkingGC() { JSContext* cx = dom::danger::GetJSContext(); PrepareForFullGC(cx); - NonIncrementalGC(cx, GC_SHRINK, gcreason::COMPONENT_UTILS); + NonIncrementalGC(cx, GC_SHRINK, GCReason::COMPONENT_UTILS); return NS_OK; } @@ -1696,7 +1696,7 @@ class PreciseGCRunnable : public Runnable { NS_IMETHOD Run() override { nsJSContext::GarbageCollectNow( - gcreason::COMPONENT_UTILS, nsJSContext::NonIncrementalGC, + GCReason::COMPONENT_UTILS, nsJSContext::NonIncrementalGC, mShrinking ? nsJSContext::ShrinkingGC : nsJSContext::NonShrinkingGC); mCallback->Callback(); diff --git a/js/xpconnect/src/nsXPConnect.cpp b/js/xpconnect/src/nsXPConnect.cpp index a97ddf81c6386..a9c2b01a0357b 100644 --- a/js/xpconnect/src/nsXPConnect.cpp +++ b/js/xpconnect/src/nsXPConnect.cpp @@ -91,7 +91,7 @@ nsXPConnect::~nsXPConnect() { // XPConnect, to clean the stuff we forcibly disconnected. The forced // shutdown code defaults to leaking in a number of situations, so we can't // get by with only the second GC. 
:-( - mRuntime->GarbageCollect(JS::gcreason::XPCONNECT_SHUTDOWN); + mRuntime->GarbageCollect(JS::GCReason::XPCONNECT_SHUTDOWN); mShuttingDown = true; XPCWrappedNativeScope::SystemIsBeingShutDown(); @@ -101,7 +101,7 @@ nsXPConnect::~nsXPConnect() { // after which point we need to GC to clean everything up. We need to do // this before deleting the XPCJSContext, because doing so destroys the // maps that our finalize callback depends on. - mRuntime->GarbageCollect(JS::gcreason::XPCONNECT_SHUTDOWN); + mRuntime->GarbageCollect(JS::GCReason::XPCONNECT_SHUTDOWN); NS_RELEASE(gSystemPrincipal); gScriptSecurityManager = nullptr; diff --git a/layout/base/nsDocumentViewer.cpp b/layout/base/nsDocumentViewer.cpp index 7f84adf8ff035..ddec1f0a96efc 100644 --- a/layout/base/nsDocumentViewer.cpp +++ b/layout/base/nsDocumentViewer.cpp @@ -1156,7 +1156,7 @@ nsDocumentViewer::LoadComplete(nsresult aStatus) { // It's probably a good idea to GC soon since we have finished loading. nsJSContext::PokeGC( - JS::gcreason::LOAD_END, + JS::GCReason::LOAD_END, mDocument ? mDocument->GetWrapperPreserveColor() : nullptr); #ifdef NS_PRINTING @@ -1412,7 +1412,7 @@ nsDocumentViewer::PageHide(bool aIsUnload) { if (aIsUnload) { // Poke the GC. The window might be collectable garbage now. - nsJSContext::PokeGC(JS::gcreason::PAGE_HIDE, + nsJSContext::PokeGC(JS::GCReason::PAGE_HIDE, mDocument->GetWrapperPreserveColor(), NS_GC_DELAY * 2); } @@ -2361,7 +2361,7 @@ UniquePtr nsDocumentViewer::CreateStyleSet(Document* aDocument) { NS_IMETHODIMP nsDocumentViewer::ClearHistoryEntry() { if (mDocument) { - nsJSContext::PokeGC(JS::gcreason::PAGE_HIDE, + nsJSContext::PokeGC(JS::GCReason::PAGE_HIDE, mDocument->GetWrapperPreserveColor(), NS_GC_DELAY * 2); } diff --git a/parser/html/nsHtml5StreamParser.cpp b/parser/html/nsHtml5StreamParser.cpp index 1903af6ac8f48..031caeacdf3be 100644 --- a/parser/html/nsHtml5StreamParser.cpp +++ b/parser/html/nsHtml5StreamParser.cpp @@ -863,7 +863,7 @@ class MaybeRunCollector : public Runnable { NS_IMETHOD Run() override { nsJSContext::MaybeRunNextCollectorSlice(mDocShell, - JS::gcreason::HTML_PARSER); + JS::GCReason::HTML_PARSER); return NS_OK; } diff --git a/toolkit/components/telemetry/build_scripts/mozparsers/parse_histograms.py b/toolkit/components/telemetry/build_scripts/mozparsers/parse_histograms.py index 25265fd5a5337..fb234ffa4e7d0 100755 --- a/toolkit/components/telemetry/build_scripts/mozparsers/parse_histograms.py +++ b/toolkit/components/telemetry/build_scripts/mozparsers/parse_histograms.py @@ -547,7 +547,7 @@ def check_field_types(self, name, definition): if not self._strict_type_checks: # This handles some old non-numeric expressions. EXPRESSIONS = { - "JS::gcreason::NUM_TELEMETRY_REASONS": 101, + "JS::GCReason::NUM_TELEMETRY_REASONS": 101, "mozilla::StartupTimeline::MAX_EVENT_ID": 12, } diff --git a/toolkit/components/telemetry/core/TelemetryHistogram.cpp b/toolkit/components/telemetry/core/TelemetryHistogram.cpp index 6b980590ee0d6..456a06da0f8e7 100644 --- a/toolkit/components/telemetry/core/TelemetryHistogram.cpp +++ b/toolkit/components/telemetry/core/TelemetryHistogram.cpp @@ -2342,11 +2342,11 @@ void TelemetryHistogram::InitializeGlobalState(bool canRecordBase, // We add static asserts here for those values to match so that future changes // don't go unnoticed. 
// clang-format off - static_assert((JS::gcreason::NUM_TELEMETRY_REASONS + 1) == + static_assert((uint32_t(JS::GCReason::NUM_TELEMETRY_REASONS) + 1) == gHistogramInfos[mozilla::Telemetry::GC_MINOR_REASON].bucketCount && - (JS::gcreason::NUM_TELEMETRY_REASONS + 1) == + (uint32_t(JS::GCReason::NUM_TELEMETRY_REASONS) + 1) == gHistogramInfos[mozilla::Telemetry::GC_MINOR_REASON_LONG].bucketCount && - (JS::gcreason::NUM_TELEMETRY_REASONS + 1) == + (uint32_t(JS::GCReason::NUM_TELEMETRY_REASONS) + 1) == gHistogramInfos[mozilla::Telemetry::GC_REASON_2].bucketCount, "NUM_TELEMETRY_REASONS is assumed to be a fixed value in Histograms.json." " If this was an intentional change, update the n_values for the " diff --git a/toolkit/components/telemetry/tests/python/test_histogramtools_non_strict.py b/toolkit/components/telemetry/tests/python/test_histogramtools_non_strict.py index c83663de9e8df..8c68ce24315b5 100644 --- a/toolkit/components/telemetry/tests/python/test_histogramtools_non_strict.py +++ b/toolkit/components/telemetry/tests/python/test_histogramtools_non_strict.py @@ -52,7 +52,7 @@ def test_non_numeric_expressions(self): "TEST_NON_NUMERIC_HISTOGRAM": { "kind": "linear", "description": "sample", - "n_buckets": "JS::gcreason::NUM_TELEMETRY_REASONS", + "n_buckets": "JS::GCReason::NUM_TELEMETRY_REASONS", "high": "mozilla::StartupTimeline::MAX_EVENT_ID" }} diff --git a/xpcom/base/CycleCollectedJSRuntime.cpp b/xpcom/base/CycleCollectedJSRuntime.cpp index 4f3afcc9d5dc9..5630ffde96022 100644 --- a/xpcom/base/CycleCollectedJSRuntime.cpp +++ b/xpcom/base/CycleCollectedJSRuntime.cpp @@ -814,12 +814,12 @@ void CycleCollectedJSRuntime::TraverseNativeRoots( if (aProgress == JS::GC_CYCLE_END && JS::dbg::FireOnGarbageCollectionHookRequired(aContext)) { - JS::gcreason::Reason reason = aDesc.reason_; + JS::GCReason reason = aDesc.reason_; Unused << NS_WARN_IF( NS_FAILED(DebuggerOnGCRunnable::Enqueue(aContext, aDesc)) && - reason != JS::gcreason::SHUTDOWN_CC && - reason != JS::gcreason::DESTROY_RUNTIME && - reason != JS::gcreason::XPCONNECT_SHUTDOWN); + reason != JS::GCReason::SHUTDOWN_CC && + reason != JS::GCReason::DESTROY_RUNTIME && + reason != JS::GCReason::XPCONNECT_SHUTDOWN); } if (self->mPrevGCSliceCallback) { @@ -829,17 +829,17 @@ void CycleCollectedJSRuntime::TraverseNativeRoots( class MinorGCMarker : public TimelineMarker { private: - JS::gcreason::Reason mReason; + JS::GCReason mReason; public: - MinorGCMarker(MarkerTracingType aTracingType, JS::gcreason::Reason aReason) + MinorGCMarker(MarkerTracingType aTracingType, JS::GCReason aReason) : TimelineMarker("MinorGC", aTracingType, MarkerStackRequest::NO_STACK), mReason(aReason) { MOZ_ASSERT(aTracingType == MarkerTracingType::START || aTracingType == MarkerTracingType::END); } - MinorGCMarker(JS::GCNurseryProgress aProgress, JS::gcreason::Reason aReason) + MinorGCMarker(JS::GCNurseryProgress aProgress, JS::GCReason aReason) : TimelineMarker( "MinorGC", aProgress == JS::GCNurseryProgress::GC_NURSERY_COLLECTION_START @@ -853,7 +853,7 @@ class MinorGCMarker : public TimelineMarker { TimelineMarker::AddDetails(aCx, aMarker); if (GetTracingType() == MarkerTracingType::START) { - auto reason = JS::gcreason::ExplainReason(mReason); + auto reason = JS::ExplainGCReason(mReason); aMarker.mCauseName.Construct(NS_ConvertUTF8toUTF16(reason)); } } @@ -867,7 +867,7 @@ class MinorGCMarker : public TimelineMarker { /* static */ void CycleCollectedJSRuntime::GCNurseryCollectionCallback( JSContext* aContext, JS::GCNurseryProgress aProgress, - JS::gcreason::Reason 
aReason) { + JS::GCReason aReason) { CycleCollectedJSRuntime* self = CycleCollectedJSRuntime::Get(); MOZ_ASSERT(CycleCollectedJSContext::Get()->Context() == aContext); MOZ_ASSERT(NS_IsMainThread()); @@ -1125,13 +1125,10 @@ bool CycleCollectedJSRuntime::AreGCGrayBitsValid() const { return js::AreGCGrayBitsValid(mJSRuntime); } -void CycleCollectedJSRuntime::GarbageCollect(uint32_t aReason) const { - MOZ_ASSERT(aReason < JS::gcreason::NUM_REASONS); - JS::gcreason::Reason gcreason = static_cast(aReason); - +void CycleCollectedJSRuntime::GarbageCollect(JS::GCReason aReason) const { JSContext* cx = CycleCollectedJSContext::Get()->Context(); JS::PrepareForFullGC(cx); - JS::NonIncrementalGC(cx, GC_NORMAL, gcreason); + JS::NonIncrementalGC(cx, GC_NORMAL, aReason); } void CycleCollectedJSRuntime::JSObjectsTenured() { diff --git a/xpcom/base/CycleCollectedJSRuntime.h b/xpcom/base/CycleCollectedJSRuntime.h index 9314d17690008..9ad9f42cb68da 100644 --- a/xpcom/base/CycleCollectedJSRuntime.h +++ b/xpcom/base/CycleCollectedJSRuntime.h @@ -157,7 +157,7 @@ class CycleCollectedJSRuntime { const JS::GCDescription& aDesc); static void GCNurseryCollectionCallback(JSContext* aContext, JS::GCNurseryProgress aProgress, - JS::gcreason::Reason aReason); + JS::GCReason aReason); static void OutOfMemoryCallback(JSContext* aContext, void* aData); /** * Callback for reporting external string memory. @@ -264,7 +264,7 @@ class CycleCollectedJSRuntime { void FixWeakMappingGrayBits() const; void CheckGrayBits() const; bool AreGCGrayBitsValid() const; - void GarbageCollect(uint32_t aReason) const; + void GarbageCollect(JS::GCReason aReason) const; // This needs to be an nsWrapperCache, not a JSObject, because we need to know // when our object gets moved. But we can't trace it (and hence update our diff --git a/xpcom/base/nsCycleCollector.cpp b/xpcom/base/nsCycleCollector.cpp index 9a5e166c5a7ac..13d84d59f60a1 100644 --- a/xpcom/base/nsCycleCollector.cpp +++ b/xpcom/base/nsCycleCollector.cpp @@ -3272,8 +3272,8 @@ void nsCycleCollector::FixGrayBits(bool aForceGC, TimeLog& aTimeLog) { uint32_t count = 0; do { - mCCJSRuntime->GarbageCollect(aForceGC ? JS::gcreason::SHUTDOWN_CC - : JS::gcreason::CC_FORCED); + mCCJSRuntime->GarbageCollect(aForceGC ? 
JS::GCReason::SHUTDOWN_CC
+                                           : JS::GCReason::CC_FORCED);
 
     mCCJSRuntime->FixWeakMappingGrayBits();
 
@@ -3296,7 +3296,7 @@ void nsCycleCollector::FinishAnyIncrementalGCInProgress() {
     NS_WARNING("Finishing incremental GC in progress during CC");
     JSContext* cx = CycleCollectedJSContext::Get()->Context();
     JS::PrepareForIncrementalGC(cx);
-    JS::FinishIncrementalGC(cx, JS::gcreason::CC_FORCED);
+    JS::FinishIncrementalGC(cx, JS::GCReason::CC_FORCED);
   }
 }

From 25158c7ea25cea92de27c87d0206a17a77dca856 Mon Sep 17 00:00:00 2001
From: Olli Pettay
Date: Mon, 21 Jan 2019 16:03:03 +0200
Subject: [PATCH 8/9] Bug 1521334, make the parser flush its tree operation queue sooner, r=hsivonen

---
 modules/libpref/init/StaticPrefList.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/libpref/init/StaticPrefList.h b/modules/libpref/init/StaticPrefList.h
index a6e765b693c7e..cbbdb573e4108 100644
--- a/modules/libpref/init/StaticPrefList.h
+++ b/modules/libpref/init/StaticPrefList.h
@@ -561,7 +561,7 @@ VARCACHE_PREF(
 VARCACHE_PREF(
   "html5.flushtimer.initialdelay",
    html5_flushtimer_initialdelay,
-  RelaxedAtomicInt32, 120
+  RelaxedAtomicInt32, 16
 )
 
 // Time in milliseconds between the time a network buffer is seen and the timer
@@ -569,7 +569,7 @@ VARCACHE_PREF(
 VARCACHE_PREF(
   "html5.flushtimer.subsequentdelay",
    html5_flushtimer_subsequentdelay,
-  RelaxedAtomicInt32, 120
+  RelaxedAtomicInt32, 16
 )
 
 //---------------------------------------------------------------------------

From d86ab4b7948fa3afe00f9707a036b5077dc6d36b Mon Sep 17 00:00:00 2001
From: Jon Coppeard
Date: Mon, 21 Jan 2019 14:26:24 +0000
Subject: [PATCH 9/9] Bug 1518075 - Fix rooting hazard r=me on a CLOSED TREE

---
 dom/base/nsGlobalWindowInner.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dom/base/nsGlobalWindowInner.cpp b/dom/base/nsGlobalWindowInner.cpp
index 4e4211c131b2a..fc6690d867ebd 100644
--- a/dom/base/nsGlobalWindowInner.cpp
+++ b/dom/base/nsGlobalWindowInner.cpp
@@ -6006,7 +6006,7 @@ bool nsGlobalWindowInner::RunTimeoutHandler(Timeout* aTimeout,
       nsJSUtils::ExecutionContext exec(aes.cx(), global);
       rv = exec.Compile(options, handler->GetHandlerText());
 
-      JSScript* script = exec.MaybeGetScript();
+      JS::Rooted<JSScript*> script(aes.cx(), exec.MaybeGetScript());
       if (script) {
         LoadedScript* initiatingScript = handler->GetInitiatingScript();
         if (initiatingScript) {
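
The final hunk above works because JS::Rooted<JSScript*> registers the pointer with the context's rooting machinery, so any GC triggered later in the function traces (and, under a moving collector, updates) the script, whereas a bare JSScript* held across an allocating call is exactly what the static rooting analysis flags. The following is a minimal stand-alone sketch of the same pattern, not part of any patch above: RunRootedScript and rawScript are hypothetical names, error handling is reduced to a bool, and only well-known public JSAPI calls are used.

    #include "jsapi.h"

    static bool RunRootedScript(JSContext* cx, JSScript* rawScript) {
      // Root the raw pointer for the rest of this scope so the GC keeps it
      // valid; holding the bare JSScript* across the execute call below would
      // be a rooting hazard.
      JS::Rooted<JSScript*> script(cx, rawScript);
      if (!script) {
        return false;
      }
      JS::RootedValue rval(cx);
      // JS_ExecuteScript may allocate and therefore may GC; `script` stays
      // valid because it is rooted above.
      return JS_ExecuteScript(cx, script, &rval);
    }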
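
One recurring detail in the JS::GCReason conversion earlier in this series is worth spelling out: the old JS::gcreason::Reason was a plain enum, while JS::GCReason is a scoped enum (enum class), so reason values no longer convert implicitly to integers. That is why the telemetry call sites gained explicit uint32_t(reason) casts. A minimal illustration of that language rule, with an assumed enumerator subset and a hypothetical addTelemetry sink rather than the real engine definitions:

    #include <cstdint>
    #include <cstdio>

    // Assumed stand-ins, not the real SpiderMonkey declarations.
    enum class GCReason { API, DEBUG_GC, EVICT_NURSERY };

    void addTelemetry(uint32_t bucket) { std::printf("bucket %u\n", bucket); }

    int main() {
      GCReason reason = GCReason::DEBUG_GC;
      // addTelemetry(reason);          // does not compile: a scoped enum has no
      //                                // implicit conversion to an integral type
      addTelemetry(uint32_t(reason));   // explicit cast, as in the patched call sites
    }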